Hard to Vary or Hardly Usable?
This could be a promising approach to formalize HTV:
https://x.com/FZdyb/status/2051352500582641931
https://github.com/deoxyribose/hard_to_vary_posterior_predictive
It’s AI-generated, so it’s not eligible for the bounty. And I’m not familiar enough with the probability calculus to evaluate it. But I’m bookmarking it here for the future.
(One of my first criticisms was that HTV has nothing to do with likelihood, which the author granted but addressed by saying it maps onto marginal likelihood. See https://x.com/FZdyb/status/2051004605601898561 and surrounding discussion.)
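For what it’s worth, the marginal-likelihood mapping has a standard intuition behind it (the Bayesian Occam’s razor): an easy-to-vary model spreads its probability mass over everything it could have been tuned to predict, so marginalizing over its free parameter dilutes the likelihood of any one outcome. Here is a toy sketch of my own, not taken from the linked repo:

```python
# Toy illustration (my own, not from the linked repo) of the Bayesian
# Occam's razor: a model with a free knob that lets it "predict"
# anything pays for that flexibility in marginal likelihood.

OUTCOMES = ["A", "B", "C", "D"]

def rigid_likelihood(outcome: str) -> float:
    """Rigid model: committed to 'A', no free parameters."""
    return 0.97 if outcome == "A" else 0.01

def flexible_marginal_likelihood(outcome: str) -> float:
    """Flexible model: a free parameter picks which outcome to favor.

    Marginalizing over a uniform prior on that parameter averages the
    likelihood over everything the model could have been tuned to say.
    """
    total = 0.0
    for favored in OUTCOMES:
        likelihood = 0.97 if outcome == favored else 0.01
        total += likelihood / len(OUTCOMES)  # uniform prior weight
    return total

print(rigid_likelihood("A"))                         # 0.97
print(round(flexible_marginal_likelihood("A"), 4))   # 0.25
```

If ‘hard to vary’ tracks how little a theory’s predictions can be reshaped without wrecking it, then a score that penalizes reshaping capacity is at least a suggestive correspondence, which seems to be the author’s point.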
#4940 · Dennis Hackethal (OP), 7 days ago
We can redefine ‘hard to vary’, but we’d still need a working implementation in the form of computer code.
… Demeter scores 25% and axial tilt scores 100%.
Now do this universally, for any given theory.
By the way, Knut: when I go straight into ‘criticism mode’, that can sound cold or harsh. But don’t let that discourage you from exploring your idea further. Maybe you’re onto something! A working implementation of ‘hard to vary’ would be useful and vindicating.
#4936 · Knut Sondre Sæbø, revised 7 days ago
Have some thoughts, which might be way off. But interested in your response. It seems to me that "hard-to-vary" is itself the criterion that a theory should be as programmable as possible. As you note, the goal of a theory should be to make it as explicit as possible, and a program is explicitness in its most complete form. Any theory with ambiguous components automatically has a breaking point that can be changed without detection. A programmable theory has strict causal relations all the way from the axioms to the prediction, which makes any change to the components detectable. In other words: a theory is hard to vary to the extent that its components and the couplings between them can be specified as a program. If a theory is vague, you cannot tell when it has been varied.
This might give a concrete operationalization. A breaking point is any place in the formalization where the chain stops being programmable: a primitive with no implementable type, a coupling between components that cannot be turned into a function, or just a step that requires implicit theories to fill the explanatory gaps. A mathematical theory with no remaining gaps has zero breaking points and is maximally hard to vary. A theory in natural language is already worse, because words carry ambiguity and vary from mind to mind. This does not rule out better and worse theories in natural language, since we can use more or less ambiguous words and relations. But it does create a hierarchy of hard-to-vary explanations, where the share of the explanation that is programmable, or at least unambiguous, forms the basis for measuring the "hard-to-vary" criterion.
This is probably too crude a formalization. But evaluating the two theories of Demeter's emotions and axial tilt as explanations, you could check how much of each is programmable. Detecting seasons is programmable in both cases through temperature and changes in weather. Demeter's emotions and the causal link from them to the weather, which is the entire explanation, are not programmable. In the axial tilt theory, every component is. So on this measure Demeter scores 25% and axial tilt scores 100%.
I'm not a programmer, so the code below is 100% AI-generated. But here is an attempt. If we normalize a theory into the parts that can be put on a computer, namely the types it uses, the nodes (specific values) it commits to, and the functions between them, then we can score the theory by how many of those parts run.
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    kind: str           # "type", "node", or "function"
    programmable: bool  # does it compile and run?

@dataclass
class Theory:
    name: str
    items: list[Item]

    def score(self) -> float:
        if not self.items:
            return 0.0
        ok = sum(1 for i in self.items if i.programmable)
        return ok / len(self.items)

def compare(a: Theory, b: Theory) -> None:
    print(f"{a.name:<20} {a.score():.0%}")
    print(f"{b.name:<20} {b.score():.0%}")

# Demeter theory
demeter = Theory("Demeter", [
    Item("Latitude", "type", True),
    Item("Temperature", "type", True),
    Item("Goddess", "type", False),
    Item("Emotion", "type", False),
    Item("Demeter", "node", False),
    Item("emotion_state", "node", False),
    Item("emotion_at", "function", False),
    Item("emotion_weather", "function", False),
])

# Axial tilt theory
axial_tilt = Theory("Axial tilt", [
    Item("Latitude", "type", True),
    Item("Temperature", "type", True),
    Item("Angle", "type", True),
    Item("Insolation", "type", True),
    Item("axial_tilt", "node", True),
    Item("solar_constant", "node", True),
    Item("solar_angle", "function", True),
    Item("insolation_at", "function", True),
    Item("temperature", "function", True),
])

compare(demeter, axial_tilt)
Output:
Demeter              25%
Axial tilt           100%
The program goes through each item in a theory, counts how many are marked True, and divides by the total. That fraction is the score of how hard the theory is to vary.
An item is True if it can be put on a computer, either by reusing an existing type (Float, Int) or by defining a new one that compiles. It is False if no working type system can express it. The user fills in the labels; the program just counts.
Demeter: 2 of 8 items are programmable (Latitude and Temperature). Demeter, her emotions, and the functions linking them to the weather can't be implemented. So the score is 25%.
Axial tilt: 9 of 9 items are programmable: standard types, measured constants, and functions from standard physics. So the score is 100%.
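To make "programmable" concrete: the axial-tilt functions really can be written down and run. Here is a hedged sketch; the formulas are my own crude simplification (no orbital eccentricity, no day length, no atmosphere), and only the names follow the item list above.

```python
import math

# Crude, illustrative implementations of the axial-tilt items.
# The formulas are deliberately simplified; the point is only that
# every component has an implementable type and actually runs.

AXIAL_TILT = 23.44       # degrees (node: a measured value)
SOLAR_CONSTANT = 1361.0  # W/m^2 (node: a measured value)

def solar_angle(latitude_deg: float, day_of_year: int) -> float:
    """Approximate noon solar elevation angle in degrees."""
    # Simple sinusoidal model of solar declination over the year.
    declination = AXIAL_TILT * math.sin(2 * math.pi * (day_of_year - 81) / 365)
    return 90.0 - abs(latitude_deg - declination)

def insolation_at(latitude_deg: float, day_of_year: int) -> float:
    """Approximate top-of-atmosphere noon insolation in W/m^2."""
    elevation = solar_angle(latitude_deg, day_of_year)
    return max(0.0, SOLAR_CONSTANT * math.sin(math.radians(elevation)))

# At 50 degrees north, insolation near the June solstice exceeds
# insolation near the December solstice: the seasons fall out.
summer = insolation_at(50.0, 172)
winter = insolation_at(50.0, 355)
print(summer > winter)  # True
```

Nothing analogous can be written for Demeter's emotions or their causal link to the weather, and that asymmetry is exactly what the 25% versus 100% scores record.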
#4931 · Knut Sondre Sæbø, 7 days ago
It seems to me that "hard-to-vary" is itself the criterion that a theory should be as programmable as possible. As you note, the goal of a theory should be to make it as explicit as possible, and a program is explicitness in its most complete form. Any theory with ambiguous components automatically has a breaking point that is changeable without detection. A programmable theory has strict causal relations all the way from the axioms to the prediction, which makes any change to the components detectable. In other words: a theory is hard to vary to the extent that its components and the couplings between them can be specified as a program. If a theory is vague, you cannot tell when it has been varied.
This gives a concrete operationalization. A breaking point is any place in the formalization where the chain stops being programmable: a primitive with no implementable type, a coupling between components that cannot be specified, or a step that requires implicit theories to fill the explanatory gaps. A mathematical theory with no remaining gaps has zero breaking points and is maximally hard to vary. A theory in natural language is already worse, because words carry ambiguity and vary from mind to mind. This does not rule out better and worse theories in natural language, since we can use more or less ambiguous words and relations. But it does create a hierarchy of hard-to-vary explanations, where the share of the explanation that is programmable, or at least unambiguous, forms the basis for the criterion.
This is probably too crude a formalization. But evaluating the two theories of Demeter's emotions and axial tilt as explanations, you could check how much of each is programmable. Detecting seasons is programmable in both cases through temperature and changes in weather. Demeter's emotions and the causal link from them to the weather, which is the entire explanation, are not programmable. In the axial tilt theory, every component is. So on this measure Demeter scores 25% and axial tilt scores 100%.
#3069 · Dennis Hackethal (OP), revised 6 months ago
My critique of David Deutsch’s The Beginning of Infinity as a programmer. In short, his ‘hard to vary’ criterion at the core of his epistemology is fatally underspecified and impossible to apply.
Deutsch says that one should adopt explanations based on how hard they are to change without impacting their ability to explain what they claim to explain. The hardest-to-change explanation is the best and should be adopted. But he doesn’t say how to figure out which is hardest to change.
A decision-making method is a computational task. He says you haven’t understood a computational task if you can’t program it. He can’t program the steps for finding out how ‘hard to vary’ an explanation is, if only because those steps are underspecified. There are too many open questions.
So by his own yardstick, he hasn’t understood his epistemology.
You will find that and many more criticisms here: https://blog.dennishackethal.com/posts/hard-to-vary-or-hardly-usable
#3926 · Dennis Hackethal (OP), 4 months ago
@liberty-fitz-claridge says (#3885) it’d be implausible for HTV to be justificationist since that would contradict the rest of Deutsch’s anti-justificationist philosophy.
I don’t think that alone means my interpretation of HTV is implausible. We’re bound to find contradictions eventually. In a good book like BoI, they’re just rare, so when we do find them, they go against the bulk of the philosophy.
#3721 · Dennis Hackethal (OP), 4 months ago
From my article:
Isn’t the assignment of positive scores, of positive reasons to prefer one theory over another, a kind of justificationism? Deutsch criticizes justificationism throughout The Beginning of Infinity, but isn’t an endorsement of a theory as ‘good’ a kind of justification?
#3862 · Bart Vanderhaegen, revised 4 months ago
So my criticism is that the HTV criterion is not a computational task (but a principle, a universal statement), and Deutsch's criterion of understanding (you need a program) only applies to computational tasks.
By 'principle', 'universal statement', or 'theory', I mean, for example: for all masses, there is a force proportional to the inverse square of their distance; for all integers, addition is commutative; for all species, their evolution is governed by variation and selection; for all interpretations of moral actions, they are moral relativism when ... applies to that interpretation.
- Principles/universal statements/theories are not computable, because they speak about sets of (possible) transformations (not one in particular, which would be a computation) and they offer a constraining criterion on those transformations in the set.
- A computer program, by contrast, is an abstraction capable of causing one particular transformation (between sets of inputs and sets of outputs).
There may be a way to quantify HTV, and thus deal with specific evaluations of how the HTV of one theory is higher than another's. That would be a computational task. But that is different from the criterion for HTV (which is by definition not computable). And having no program for that computational task does not imply that the criterion for HTV is irrelevant, not usable, or even fluff.
Compare for example to the theory of evolution: the theory of "variation and selection" is the criterion for a set of allowable transformations (of species), but not having a specific program (e.g. for how a particular species can evolve in some particular niche) does not imply that the criterion is useless or fluff.
I think the usefulness of the HTV criterion becomes clear when you link it to Constructor Theory; then one can argue that the HTV criterion adds more than criticisms alone can. But that's a whole other story we could get into.
HTV isn’t a principle even by your own definition. What on earth are you talking about man.
Even if HTV itself is not a computational task, the decision-making method Deutsch proposes is one, and it depends on HTV. But even if we sidestep that issue and outsource HTV completely to the user, we still run into all kinds of issues. This has all been addressed. No fancy talk about sets or constraints is going to change that.
You previously claimed you’re an engineer. I don’t think you are. You just pasted some code that was clearly written by AI and didn’t even compile, twice.
You talk about ‘sets’ and ‘constraints’ and ‘computations’ but I don’t think you understand any of them. No offense but I think those concepts are all distractions so you don’t need to actually address HTV. That’s why you need to use those big words.
Discussing with you is a waste of time. Again, no offense but I don’t think you’re qualified to weigh in on this discussion. Prove me wrong and submit working, handwritten code for HTV or Deutsch’s decision-making method. I’ll delete any further comments from you in this discussion that don’t contain working code. If you keep commenting anyway, I’ll lock your account.
You criticized your own idea. Presumably that’s not what you meant to do.
#3858 · Bart Vanderhaegen, 4 months ago
Because relative criteria are fine to posit and not justificationist. We can propose criteria that claim that explanation A is better than explanation B without that being justificationism.
From BoI chapter 1 glossary:
The misconception that knowledge can be genuine or reliable only if it is justified by some source or criterion.
That says nothing about absolute vs relative. Stop making up stuff.
#3860 · Bart Vanderhaegen, revised 4 months ago
That's because a good explanation for Deutsch is not an explanation with good points, but an explanation that is harder to vary compared to any other explanation. So again relative to other explanations.
The word "good" is indeed misleading in that sense, but he clearly qualifies it as performing better, relative to other explanations, on his HTV criterion, and not as: the explanation having scored high points.
with good points
I didn’t say the explanation doesn’t make good points, I said the explanation doesn’t get points.
#3835·Bart Vanderhaegen, 4 months ago
Yes, the criterion for democracy is not a computational task, but an abstraction that constrains computational tasks. In the same way, the criterion for HTV is also not a computational task; it constrains the possible computational tasks that attempt to quantify HTV.
We understand computational tasks by being able to program them (as per Deutsch's criterion). But we understand criteria/principles/axioms/theories ... (non-computational tasks) in another way: by varying them and eliminating the variants that do not solve the problem the principle purported to solve.
For example:
a+b=b+a (in arithmetic) is a principle/axiom that we understand by eliminating possible variants (a+b ≠ a, a+b ≠ b, etc.),
but 3+5=5+3 is a specific transformation that should be understood via a computational task: adding 5 to 3, then 3 to 5, and comparing both outcomes, via a program.
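The distinction drawn here can be sketched in a few lines of code (my illustration, not from the thread): the particular identity 3+5=5+3 is settled by a single computation, while the universal axiom a+b=b+a can only be sampled by a program, never exhaustively verified.

```python
# A particular transformation: one computation settles it.
def check_particular() -> bool:
    # Add 5 to 3, then 3 to 5, and compare both outcomes.
    return 3 + 5 == 5 + 3

# A universal statement ("for all integers, addition is commutative")
# is not settled by any finite run; a program can only test sampled
# instances and fail to refute the claim.
def survives_commutativity_tests(pairs) -> bool:
    return all(a + b == b + a for a, b in pairs)

print(check_particular())  # True
print(survives_commutativity_tests([(0, 1), (-4, 7), (10**6, 3)]))  # True (survived these tests; not proven)
```

Note that the second function embodies Bart's point: however many pairs it checks, it has only eliminated variants, not computed the universal statement itself.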
So my criticism is that the HTV criterion is not a computational task (but a principle, universal statement) and Deutsch's criterion of understanding (you need a program) only applies to computational tasks.
By principle/universal statement/theory, I mean, for example: for any two masses, there is a force proportional to the inverse square of their distance; for all integers, addition is commutative; for all species, their evolution is governed by variation and selection; for all interpretations of moral actions, these are morally relativistic ones; ...
- Principles/universal statements/theories are not computable because they speak about sets of (possible) transformations (not one in particular, which would be a computation), and they offer a constraining criterion for the transformations in that set.
- A computer program, by contrast, is an abstraction capable of causing one particular transformation (between sets of inputs and sets of outputs).
There may be a way to quantify HTV, and thus deal with specific evaluations of how the HTV of one theory is higher than another's. That would be a computational task. But that is different from the criterion for HTV (which is by definition not computable). And having no program for that computational task does not imply that the criterion for HTV is irrelevant, unusable, or fluff.
Compare for example to the theory of evolution: the theory of "variation and selection" is the criterion for a set of allowable transformations (of species), but not having a specific program (e.g. for how a particular species can evolve in some particular niche) does not imply that the criterion is useless or fluff.
I think the usefulness of the HTV criterion becomes clear when you link it to Constructor Theory, then one can argue that HTV criterion adds more than criticisms alone can do. But that's a whole other story we could get into.
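The "computational task" conceded here — quantifying HTV for specific theories — can be sketched as a toy program. This is purely illustrative: the component names, the booleans, and the scoring rule are all made up for this sketch (echoing the thread's earlier example of Demeter scoring 25% and axial tilt scoring 100%), not a real implementation of an HTV measure.

```python
# Toy sketch of quantifying HTV under a made-up scoring rule:
# a theory is decomposed into components, each flagged by whether
# varying it would change the theory's predictions. The score is
# the fraction of components that are load-bearing (hard to vary).
def htv_score(components: dict) -> float:
    """components maps name -> True if varying it breaks the prediction."""
    return sum(components.values()) / len(components)

# Hypothetical decompositions, for illustration only:
demeter = {
    "demeter_grieves": True,      # tied to the seasons prediction
    "persephone_role": False,     # swappable without changing predictions
    "underworld_setting": False,
    "marriage_terms": False,
}
axial_tilt = {
    "tilt_angle": True,
    "orbit_geometry": True,
    "surface_heating": True,
    "hemisphere_asymmetry": True,
}

print(htv_score(demeter))     # 0.25
print(htv_score(axial_tilt))  # 1.0
```

Whether such a decomposition can be produced "universally, for any given theory" — as demanded earlier in the thread — is exactly the open problem; this sketch only shows the shape of the computational task, not a solution to it.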
#3838·Dennis HackethalOP, 4 months ago
So it is a relative claim about an explanation, relative to another, not versus some absolute criterion of goodness.
So what? I didn’t mention an absolute criterion. My original criticism already applies to both relative and absolute criteria of quality (what you call “goodness”).
Because relative criteria are fine to posit and are not justificationist. We can propose criteria that claim explanation A is better than explanation B without that being justificationism.
#3837·Dennis HackethalOP, 4 months ago
Similar to a crucial test …
But that’s exactly where HTV differs from Popper. Popper doesn’t give a theory points when it survives a crucial test. HTV does. From BoI chapter 1:
… testable explanations that have passed stringent tests become extremely good explanations …
#3836·Bart Vanderhaegen, 4 months ago
The criterion for HTV, applied to 2 explanations, is not justificationism, I think. It allows one to say explanation A is better than explanation B, which is equivalent to: explanation B is worse than explanation A. So it is a relative claim about an explanation, relative to another, not versus some absolute criterion of goodness. Similar to a crucial test (e.g. Eddington): we refute Newton's theory and keep Einstein's. That is not a claim about the goodness of Einstein's theory; that theory has merely survived, it has not gotten "goodness points". It could always be refuted later on by a better theory, in which case we would drop it too.
So it is a relative claim about an explanation, relative to another, not versus some absolute criterion of goodness.
So what? I didn’t mention an absolute criterion. My original criticism already applies to both relative and absolute criteria of quality (what you call “goodness”).