Can Computers Prove Theorems? (chalkdustmagazine.com)
agentultra 1609 days ago [-]
Lean is also a really, really good dependently typed functional programming language. If you're on the software developer/computer science side of the coin I highly recommend looking into it. It's not only for maths research (although that is definitely a big focus of the majority of the Lean community).

I'm working on a forthcoming series of blog posts investigating FP concepts from scratch in Lean to see how it takes with the FP-curious.

The refreshing features Lean has to offer over other FP languages like Haskell are:

1. It's dependently typed, which gives it a lot more flexibility to define objects that depend on types and types that depend on values, such as vectors that know their size (see the sketch at the end of this comment).

2. Lean is interactive by default -- type inference is really fast, even for complex objects and functions.

From my perspective Lean is a great programming language that happens to have a theorem prover available. This, in itself, can be a great feature for code that needs to provide strong guarantees about its operational parameters.
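
To make point 1 concrete, here is a minimal sketch of a length-indexed vector, in Lean 3 syntax (the names vec and vec.head are illustrative, not from any particular library):

  -- A vector that knows its size: vec α n is a sequence of α
  -- whose length is provably n.
  inductive vec (α : Type) : ℕ → Type
  | nil : vec 0
  | cons {n : ℕ} : α → vec n → vec (n + 1)

  -- head is total: the index n + 1 rules out the empty vector,
  -- so no runtime check (and no option fallback) is needed.
  def vec.head {α : Type} {n : ℕ} : vec α (n + 1) → α
  | (vec.cons x _) := x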

infogulch 1609 days ago [-]
How does Prolog relate to more formal languages like Lean/Coq? Prolog (pretty easily) has dependent types [1] as well. Is there something in the language itself that would make it unsuitable as a theorem prover, or is there something it's missing? I'm trying to get a feel for Lean from the vantage of a bit of familiarity with Prolog.

[1]: https://llaisdy.co.uk/2014/05/07/dependent-types-in-prolog/

agentultra 1608 days ago [-]
I'm not sure I can answer that question. I don't know enough about Prolog. I can say that your example could be encoded in Lean as a type and the definition would probably be much shorter.

Does Prolog automate some of the work involved in writing a proof?

tel 1609 days ago [-]
The title is a bit misleading. The author doesn't really discuss whether computers can perform the act of proving. He's mostly focused on whether computer languages can represent proofs (yes) and whether computers can check them (yes).

Computers can also search for theorems, and for proofs of those plausible theorems. There are issues, of course. Both of these spaces are very, very large. It's not clear what makes for an "interesting" theorem to prove.

Proof by proof search is also only semi-decidable in that the search will continue until it finds an answer. If the search hasn't terminated yet, it's either because there is no proof or because it just hasn't worked hard enough... yet.

You could conceivably "race" searches between P and (not P) and hope that this would save you since now one of the two will return in finite time. Unfortunately, most of these proving languages are constructive and thus it's possible for both of those to be unprovable.

It's all very cool, but we're still not getting close to having computers realistically "prove" things of interest.

Edit: Worth also noting quickly that proof search is used pretty regularly in these systems, but it's a sort of localized, partial, and interactive proof search where "tactics" for the search are human-specified, or where search is used as an interactive tool for skipping over "boring" parts of proof construction.
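
For a flavour of what that human-guided, interactive search looks like, here is a tiny illustrative Lean 3 tactic proof (the comments show the goal states the editor would display):

  example (P Q : Prop) (hp : P) (hq : Q) : P ∧ Q :=
  begin
    split,        -- splits the goal into ⊢ P and ⊢ Q
    { exact hp }, -- closes the first goal with hypothesis hp
    { exact hq }, -- closes the second goal with hypothesis hq
  end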

black_knight 1609 days ago [-]
> Unfortunately, most of these proving languages are constructive and thus it's possible for both of those to be unprovable.

The fact that it is possible for a given proposition P to be neither provable nor disprovable has nothing to do with constructive mathematics. This happens in classical mathematics as well – in any incomplete theory. All theories which serve as a foundation of mathematics are incomplete (ref. Gödel).

It is just that a constructive mathematician can be at peace with this situation, since they do not believe P ∨ ¬P to be a tautology – while a classical mathematician will forever be tormented by the existence of theorems they cannot prove in their chosen formalism.

drdeca 1609 days ago [-]
I was going to say "what if you look for a proof of (¬¬P)∨(¬P)", but, to my surprise, apparently that is also not a tautology in intuitionistic logic? So, I knew that ¬¬(P ∨ ¬P) is a tautology in intuitionistic logic, but, to my surprise, (¬¬P)∨(¬P) is not.

Here is a Twitter bot that evaluates whether statements are tautologies in intuitionistic logic; it gives the answer that (¬¬P)∨(¬P) is not, along with a Kripke counterexample: https://twitter.com/ipc_bot/status/1189953697486229504?s=20

I mention this because I was going to respond by saying "but isn't (¬¬P)∨(¬P) still a tautology? Then how does intuitionistic/constructive logic help with avoiding the problem?" – but then I checked and was surprised, so I figured other people might share the misconception I had.

However, if (P is provable in classical logic)∨((¬P) is provable in classical logic), doesn't it follow that either ((¬¬P) is provable in intuitionistic logic) or ((¬P) is provable in intuitionistic logic) ?
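
For what it's worth, the claim that ¬¬(P ∨ ¬P) is an intuitionistic tautology can also be checked mechanically; here is a minimal Lean 3 proof term, as an illustration:

  -- Double negation of excluded middle, provable constructively.
  example (P : Prop) : ¬¬(P ∨ ¬P) :=
  λ h, h (or.inr (λ p, h (or.inl p)))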

black_knight 1607 days ago [-]
I can give you some intuition as to why (¬¬P)∨(¬P) isn’t provable. The essence of a proof of a disjunction A ∨ B is to either have a proof of A or a proof of B – as well as a label telling whether we proved the left or the right hand side. So, to have a proof for every P of (¬¬P)∨(¬P) means that for every P you can either cough up a proof of ¬¬P or a proof of ¬P. This is impossible.

Your final paragraph mixes the object level and the meta level in a way I am not quite able to decipher. But there are results in a similar vein, restricted to particular theories:

The Gödel–Gentzen translation[0] translates formulas of Peano arithmetic into formulas of Heyting arithmetic in a way which preserves provability. But it is quite limited which theories this works for. For instance, no such translation exists in set theory between, say, ZFC and CZF. So there is in general no way for a constructivist to give meaning to a classical proof.

[0]: https://en.wikipedia.org/wiki/G%C3%B6del%E2%80%93Gentzen_neg...
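
For reference, the clauses of the translation, writing N for the translated formula, are roughly (standard formulation, from memory):

  A^N       = ¬¬A              (A atomic)
  (A ∧ B)^N = A^N ∧ B^N
  (A ∨ B)^N = ¬(¬A^N ∧ ¬B^N)
  (A → B)^N = A^N → B^N
  (∀x A)^N  = ∀x A^N
  (∃x A)^N  = ¬∀x ¬A^N

The preservation result for arithmetic is then: PA ⊢ A iff HA ⊢ A^N.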

fnrslvr 1609 days ago [-]
> while a classical mathematician will forever be tormented by the existence of theorems they cannot prove in their chosen formalism.

As a devotee to the law of excluded middle myself, I don't feel particularly tormented by this.

(I agree with your wider point that constructivism isn't the root cause of independent statements in mathematical foundation theories.)

fnrslvr 1609 days ago [-]
> You could conceivably "race" searches between P and (not P) and hope that this would save you since now one of the two will return in finite time.

Adding to black_knight's point, it's worth noting that even in complete theories this often isn't very heartening: interleaving searches for P and for (not P) will often take an enormous amount of time.

Even cleverer strategies can be hopeless. For example, the theory of real closed fields is complete (and hence by your interleaving argument, decidable), but appears to be EXPSPACE-complete (at least from a brief skim of the literature), hence known to be truly intractable. If standard complexity-theoretic conjectures hold up (e.g. P != NP, NP != coNP), then non-gargantuan proofs in proof systems for a wide variety of interesting problems will turn out to be both intractable to locate and incomplete.

None of this is to say that automated theorem proving can't or won't rise to the level of human mathematical activity (I think it will fwiw), but we need to be far cleverer about it.

xvilka 1609 days ago [-]
Yes, it's already possible[1] with Coq. There are two strategies for that: lowering the CIC (Calculus of Inductive Constructions) to SMT[2] and employing SMT solvers like Z3, or employing deep learning[3][4]. On the Lean side, compared to Coq, there is an interesting discussion[5] of why Lean might not be as good as they claim.

[1] https://coq.discourse.group/t/machine-learning-and-hammers-f...

[2] https://github.com/lukaszcz/coqhammer

[3] https://arxiv.org/abs/1907.07794

[4] https://github.com/UCSD-PL/proverbot9001

[5] https://github.com/coq/coq/issues/10871

mcguire 1609 days ago [-]
"This is the last Lean proof which I will give here. The reason is that for proofs any more complex than this, it gets hard to follow them on paper. The way to understand these proofs best is to look at them in Lean itself. If a proof were running in Lean on a computer then you could click anywhere on the proof and see Lean’s state (what it knows and what it is trying to prove)."

Yeah, there's the problem: proof-assistant-produced proofs are difficult to read as proofs. The alternatives are to execute them, as you would execute a program in a debugger to understand it (um, ick), or to "pretend to be the computer" and execute them manually.

The interesting part is that exactly this problem is faced by software, a field which has developed ways of writing programs that are easier to understand, including the use of formal methods.

derefr 1609 days ago [-]
Humans like to pretend business-logic workflow digraphs are actually workflow trees. Every programming language† is an attempt to "tell the story" of the runtime paths through a digraph of control-flow, using a lexical tree of syntax structures. I've only written a few proofs in my life, but I get the sense that "at scale", they're quite similar: a narrativization or explication of a bunch of paths through lemma-space, that branch out and re-converge.

When any program becomes sufficiently complex, its nature as a digraph is forced to the surface, and the ability to continue to pretend it's just a tree becomes untenable. We get confused, working with these complex digraphs, unless we use tools that allow us to visualize digraphs as digraphs, walking around them and looking at them from different angles.

Honestly, I don't think this is anything to do with "doing programming [or proof-writing] wrong", so much as it is an inherent fact of the nature of systems of lemmas. It's a bit like how humans can't visualize hypercubes directly—we need to instead use a tool (lower-dimensional projection) to "walk around" these mathematical objects and look at them from different angles, in order to understand them. I would posit that systems of lemmas are just such a mathematical object.

Sure, you can try to find isomorphic programs, or proofs, that are more lexical-tree-like, and so easier for humans to hold in their heads. But if you require that your programs, or proofs, fit to some standard of tree-like-ness, there'll be some programs/proofs that you just won't be able to write. Maybe that's not so bad for programs (those programs were probably awful anyway), but ignoring a proof (or an area of proof-space) when it's perfectly valid, just because we need tools to look at it, seems a bit silly.

---

† Or rather, every programming language that people use in practice, because even in languages with GOTO, people avoid it, and restrict themselves to the structured-programming subset of the language. The programmer who wants to write everything as coroutines or (call/cc)s is pretty rare.

eli_gottlieb 1609 days ago [-]
I'd be very curious to know, empirically, whether people who aren't trained in computer science, software engineering, or pure mathematics consider reading a proof or reading a program to be easier and more understandable.
solveit 1609 days ago [-]
I would guess that they are bad enough at both that the difference doesn't really matter. I would be interested in what someone at the highest levels of all three fields thought, but I'm not sure such people exist.
not_a_math 1609 days ago [-]
We have a bunch of theorem provers - Lean, Coq, Agda, Isabelle

If you prove something in Lean, can you automatically convert the proof into a Coq one (even if it looks like uglified/minified code)?

It feels like it should be possible to "merge" the proof repositories of these languages together into a common one.

Or to put it another way: if you can prove in Coq that Lean is correct, then it should imply that Lean proofs are correct too.

drdeca 1609 days ago [-]
I asked a similar question in the comments on a YouTube video, and Mario Carneiro, one of the people who work on Lean, answered me.

He said he does work in this area, and that it kind of works (at least for the theorems you would want to translate?), but in addition to the proofs being very not nice to read, the way the statements of the theorems are translated is, by default, also rather not nice (they are statements about specific formulations of the objects (such as the set of integers) as they are expressed/defined in the language being translated from, instead of referring to the corresponding objects that have been defined in the language being translated to). However, he says that it doesn’t take all that much work to manually massage it until the statement of the theorem is expressed in the way that you want it to be.

I’m on my phone right now, and the phone YouTube app doesn’t give me a permalink, but I’ll try to edit this comment soon on my laptop to include a link to the comment section.

Edit: here are links to what Mario Carneiro said in the thread (click "read all replies" to read the full thread):

https://www.youtube.com/watch?v=Dp-mQ3HxgDE&lc=UgwpFSmapLZ5S...

https://www.youtube.com/watch?v=Dp-mQ3HxgDE&lc=UgwpFSmapLZ5S...

phonebucket 1609 days ago [-]
Lean has been going hard at PR lately.

I would really like to dip my toe into the theorem proving waters from a scientific/mathematical perspective.

I'm not sure whether to start with Coq, Agda, Isabelle or Lean. Does anyone on HN have a feeling as to a sensible one to start with?

tom_mellior 1609 days ago [-]
Start with the one with the best documentation. It's hard to beat Software Foundations: https://softwarefoundations.cis.upenn.edu/, which happens to be for Coq.

If you search the web for "Software Foundations in X", you will find partial ports of some of the initial chapters to pretty much any other proof assistant. Maybe nowadays there are full, complete ports that would allow you to use Isabelle or Lean instead.

phonebucket 1609 days ago [-]
Software Foundations and Coq look like a fantastic resource. Thank you very much.
tel 1609 days ago [-]
Coq is definitely the best documented. Unfortunately, the tendency to use the tactics language (while incredibly useful in practice) obscures how the structure of proofs relates to their theorems.

This isn't really a problem per se, but instead an invitation to look into, say, Agda to see that side of things, too.

zozbot234 1609 days ago [-]
Agda just uses proof terms directly – you could do that in Coq and Lean as well. It's a bit of a bummer, though, that neither Coq nor Lean has a currently-maintained declarative mode à la Mizar/Isar. The former C-zar included in older Coq versions (aka Mathematical Proof Mode) was especially nifty, albeit not totally free from bugs.

(One other approach is to generate a "declarative", human-readable version of the proof directly from the proof term. Very old versions of Coq (6.x series) and IIRC even Agda (1.x) could essentially do this. Declarative proof mode integrates better with complex tactics however - proof terms can sometimes get a bit weird.)
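
Lean's calc blocks give at least a small taste of that declarative style; a toy illustration in Lean 3, using the core lemma nat.add_zero:

  -- Each step states the fact being proved and its justification,
  -- so the proof can be read without replaying it in an editor.
  example (a b : ℕ) (h : a = b) : a + 0 = b :=
  calc a + 0 = a : nat.add_zero a
         ... = b : h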

tel 1607 days ago [-]
I agree that you can—I've done that in Coq, though I don't have any experience with Lean.

The advantage of Agda is that everyone else does, too. This provides two things: one, there's a huge set of examples of how people have constructed their proof terms which encode best practices and are themselves designed to be as readable as possible, and two, people also build tooling which helps you to make them.

In Coq, since tactics are primary, you end up with things like CoqHammer (Coq's analogue of Isabelle's Sledgehammer). This is super practical and useful for tactics management, but it's a whole culture that will take you further and further away from understanding how proof terms work.

vilhelm_s 1609 days ago [-]
Coq and Isabelle are much more mature systems, they have already been used in a lot of big developments, and they have large user communities and existing textbooks and so on.

Agda is pretty finicky, proving stuff in it is harder and the entire project feels more researchy (as a playground to try out ideas for programming languages).

Lean seems promising, but it's still kind of new, so compared to Coq you will not find as much documentation, and there are fewer people on Stack Overflow to answer questions.

zitterbewegung 1609 days ago [-]
I have tried to use some of these theorem proving systems. Since Coq is pretty popular (which means lots of documentation and examples), I would try that.
dwheeler 1609 days ago [-]
Computers can directly prove some theorems today, and in other cases there are various interactive provers that let humans and computers work together to find formal proofs. In some systems using computers can give much better confidence that the proof is correct, because computers are very good at carefully checking every step; nitpicking is their forte and they don't get tired.

Freek Wiedijk maintains a list of 100 challenges for math formalization and the status of various systems here:

http://www.cs.ru.nl/~freek/100/ - Formalizing 100 Theorems

Lean hasn't accomplished as many as some others, but it's certainly a contender!

thanatropism 1609 days ago [-]
Jean-Yves Girard says that true theorems have corollaries.

Computers can prove all sorts of propositions, but the gooey philosophical question is -- which propositions are Theorems in the grand civilizational project of mathematics?

arethuza 1609 days ago [-]
So software can mechanically crank through and generate propositions but which of these are actually worth paying attention to?

Edit: Has anyone tried deep learning (or similar techniques) to do the identification of interesting theorems?

black_knight 1609 days ago [-]
Yes, the job of a mathematician is not just to prove theorems, but rather to find the interesting or beautiful ones.
Laakeri 1609 days ago [-]
However, currently the most difficult and time-consuming part of the job is proving those interesting theorems.
devinplatt 1609 days ago [-]
I'm genuinely curious, why do you believe this? From experience? From hearing mathematicians say so?

I'm currently working on my first formal math paper, and I've been spending a lot of time figuring out how best to present the information so that it is comprehensible and easy to read. Things like the structure of the paper, wording, notation, and more.

There's probably a lot of time to spend convincing others that your theorem is interesting, too. Some of the most respected works in mathematics after all are thoughtful compilations of previously proven results.

mannykannot 1609 days ago [-]
"However, if computers do get better than humans at proving theorems, I might be out of a job anyway."

Not until computers are coming up with useful axiom-sets and effective tactics, I suppose.

DailyHN 1609 days ago [-]
If this is interesting to you, I recommend "The Book of Why" by Judea Pearl.
acd 1609 days ago [-]
There exists the software Coq which is a theorem prover. https://en.wikipedia.org/wiki/Coq
frabert 1609 days ago [-]
Correction: it is a proof assistant, not a theorem prover. Z3 (also from de Moura) is a theorem prover.

EDIT: The main difference between the two is that with a proof assistant, you write a proof and the assistant checks its correctness, while a prover provides you with a proof of a proposition (or a refutation of it) on its own.
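
A small illustration of the difference in Lean 3 syntax (just a sketch: in the first example the user supplies the proof, in the second a decision procedure constructs it):

  -- Proof assistant style: the user writes the proof term,
  -- the system merely checks it.
  example : 2 + 2 = 4 := rfl

  -- Prover style: dec_trivial invokes a decision procedure
  -- that finds the proof on its own.
  example : 10 < 100 := dec_trivial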

nocturnial 1609 days ago [-]
This article is about Lean, and I really like and want to like Lean. But... at the moment it bothers me that the authors still consider it an experimental engine. In this case that means development stopped on version 3 when they started version 4. No bugfixes, no enhancements are accepted into version 3.

When/if they make the mental switch from experimental to a supported product, I'll definitely use it and maybe even contribute to their proof library (if there is any low-hanging fruit left).

3PS 1609 days ago [-]
> Occasionally we mathematicians find that we need a new tactic—for example proving that (a+b)^3=a^3+3a^2b+3ab^2+b^3 just from the axioms of a ring is surprisingly difficult!

Perhaps because what you need there is not just a ring, but a commutative ring :P

Jokes aside, that was an excellent article and introduction to interactive theorem proving. I'm looking forward to seeing what Lean 4 brings to the table when it's released.
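
In Lean's mathlib, that particular pain point is exactly what the ring tactic automates; for illustration (Lean 3 with mathlib):

  import tactic.ring

  -- ring normalizes both sides under the commutative ring axioms.
  example (a b : ℤ) : (a + b)^3 = a^3 + 3*a^2*b + 3*a*b^2 + b^3 :=
  by ring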

segfaultbuserr 1609 days ago [-]
For (some) much simpler first-order systems, the answer is certainly yes. Tarski's axioms for (a restricted) Euclidean geometry admit a decision procedure: an algorithm that shows any given proposition to be either true or false.
RedComet 1609 days ago [-]
You can axiomatize anything. The axioms are chosen because they reflect reality. The axioms reflect the building blocks, not the other way around.
1609 days ago [-]
beamatronic 1609 days ago [-]
Computers can prove any theorem which can be proven by a computer.
Yajirobe 1609 days ago [-]
ctrl+F

Gödel

ctrl+w

fnrslvr 1609 days ago [-]
If the headline question were something more like "can computers prove all theorems reliably?" this response would be more understandable. But a fairer interpretation would be "do automated theorem proving techniques stand a chance of making novel contributions to the mathematical literature?", to which I think it's pretty unfairly dismissive to rule out the possibility at this point. Considering it's still pretty early days for automated theorem proving, there's already some promising progress: an automated prover resolved the Robbins conjecture.

If you think Gödel's incompleteness theorems are hard evidence that computers can never consistently perform on the same level as humans at theorem proving, take it from someone who understands Gödel's results well: you're mistaken.

---

I think one sense in which the headline seems a little misleading, though, is that on a skim it looks like it's more about formalizing mathematical knowledge so that computers can verify theorems proven largely by people, than it is about computers themselves proving the theorems. (Which is also a fine topic, I'm not complaining.)

black_knight 1609 days ago [-]
Gödel’s incompleteness theorems are actually not very relevant to computer formalisation of mathematics. They crushed the naïve hopes of some classical logicians at the beginning of the 20th century. But they do not pose any further restrictions on what a computer can do, as opposed to what a human can do, with any specific set of axioms.
tom_mellior 1609 days ago [-]
Lovely poem.

One reading of it is something like "if computers can't prove that computer proofs are always correct, then we can't be sure of anything", but that is true of human proofs as well, so we might as well Ctrl-W mathematics itself.

Another reading is something like "this very introductory article didn't mention whether we can formalize Gödel's incompleteness results in the computer, so it's too introductory for me". Fair enough.

ColinWright 1609 days ago [-]
That's a short and pithy response, but I'd like to know more.

Imagine, if you will, that you are addressing an audience that knows what Gödel's results are. In particular, assume that we know Gödel's proof that any formal system that's sufficiently complex to hold a model of arithmetic has statements that are true, but unprovable within the system.

Now, can you explain to us why this result makes it pointless to read the article?

yters 1609 days ago [-]
Can computers solve the halting problem? If not, then they cannot find their own Gödel sentence, and thus are limited in their ability to prove things.
drdeca 1609 days ago [-]
Is the problem with finding the Gödel sentence, or with proving it?

I believe the issue is the proving it.

For a sentence to be a Gödel sentence of something, that something, I think, should be a formal system, a set of inference rules and axioms. While there may generally not be much issue in talking about "the Gödel sentence of [a process/thing that uses such a system]" to refer to the Gödel sentence of the system that the thing uses, in this case, I think there is no issue. Finding the Gödel sentence of a system is, iirc, a fairly straightforward computational process, given a good description of the system.

So, it should be no difficulty to have a program that takes its definition of a system, and checks proofs in that system / looks for proofs in that system, and also computes the Gödel sentence of that system.

Furthermore, I think (but am slightly less sure, though still fairly sure) that consistent and sound systems can prove things like "if the system [description of self goes here] can prove [Gödel sentence of the system], then it is not consistent."

Furthermore, I see nothing establishing that humans can do any better than computers in this regard.

Be careful when using the Gödel incompleteness theorems as justifications of positions! They are often misinterpreted.

yters 1608 days ago [-]
Humans perhaps cannot goedelize themselves, or perhaps it's not possible to formalize the mind. However, in either case, it also intuitively seems that humans can goedelize any finite, formal system. Thus, the human mind is not reducible to a finite, formal system.
drdeca 1607 days ago [-]
A human can fit only finitely much information in their memory. There are formal systems that no human could fit a description of into their mind.

Also, the way humans reason seems more like something paraconsistent than something truly self-consistent. A person can be convinced of something by some argument, and then convinced the other way.

It is true that the human mind is not an example of [thing which the incompleteness theorems show cannot exist], but this is not surprising.

yters 1606 days ago [-]
We can augment our memory with physical storage. We might not analyze within the lifetime or capacity of the universe (or multiverse), but that's a practical limitation, not a theoretical one.

Humans are irrational and all, but we make surprisingly consistent formal systems. I don't think an inconsistent system could do this, so this suggests the human mind is something beyond finite formal systems.

For example, we could model this ability as access to an infinite table of all possible Turing machines and their halting status, perhaps with some noise added to represent our irrationality. We would still have something beyond the limitations of goedel's theorem, as well as computation and randomness, which would account for our intuition in the matter.

So there is no good reason to dismiss the idea there is something special about the human mind not captured by formal systems.

drdeca 1605 days ago [-]
I do agree that there is something special about the human mind not captured by formal systems (in the "set of axioms or axiom schemas + inference rules" sense). (not so much for empirical external evidence reasons, so much as for like, what I want to call "teleological reasons"? Like, my goals and values are more coherent/less nonsensical assuming that there is something special, so it makes sense (as far as my goals are concerned) for me to assume that there is something special.)

However, I'm not so sure that this something special includes anything Gödel related.

As to whether an "inconsistent system" could "make" consistent formal systems, I'm not entirely sure what you mean. A formal system (in the sense of a set of axioms + inference rules) is not an agent, so I'm not sure what you mean about it "making" things. We talk about a formal system "proving" something, but perhaps a clearer way of saying what we mean by that is that the system admits a proof of the thing, or that there is a proof of the thing, for/within that system.

If by a system "making" a system, we just mean that it "proves" (as in, "admits a proof") that some other system is consistent, then I see no reason that a paraconsistent system couldn't show that some other system is consistent.

Have you seen the "Logical Induction" paper by MIRI?

If a system is inconsistent, then there is a proof that it is inconsistent. Logical induction assigns "probabilities" to different mathematical statements, and it can be shown that these probabilities converge in reasonable ways. If there is a proof of a statement in the system that the logical induction thing is working under, then the "probability" of that statement will converge to 1. Also, the statements that logical induction assigns probabilities to include statements about itself.

I think that, if some particular system is self-consistent, then, for one thing, the probability for the statement "that system is inconsistent" will not go to 1, and furthermore, there are statements like "the probability for 'that system is inconsistent' will be below c at step t" (for t much larger than the current time step) whose probabilities will (I think) go to 1.

In that sense, I think logical induction could probably do a good job at concluding that some systems are probably self consistent.

Now, I'm not totally sure of that. I don't 100% understand the logical induction results. But, yeah. Seems relevant to me.

yters 1605 days ago [-]
Not looking for agreement. I think there is an empirical, mathematical case to be made for this 'specialness' of the mind that is beyond what computation and randomness can give us.

Another way to state this is in terms of the halting problem. Halting oracles are possible and detectable. Halting oracles can authenticate the Gödel sentences for all finite, consistent formal systems.

Looking at the logical induction page, I'm not sure it offers anything beyond Solomonoff. If they have a computable method, then it will be deficient compared to Solomonoff induction. Leonid Levin published something called randomness conservation in 1984, which implies computers cannot learn math, or any other consistent logic, beyond what they are initially fed with. So, given an initial set of consistent axioms, a computer cannot come up with anything new and consistent.

Anyways, bottom line of what I'm saying is halting oracles are possible and empirically detectable. We should entertain the possibility that the human mind is a halting oracle, and set about researching that idea. We are making no progress with our assumption the mind is computational.

drdeca 1604 days ago [-]
Solomonoff induction can make eventually good predictions about any computable environment, but: Solomonoff induction is limit computable, not computable, while logical induction is computable.

Logical induction can reach conclusions about what a system using a computable approximation to solomonoff induction would conclude. Actually, it can also assign probabilities to what full solomonoff induction would conclude.

Solomonoff induction cannot accurately model a world which contains agents who use solomonoff induction. Logical induction plays nicely with self reference.

Importantly, with some naive ways of attempting to do probabilistic induction on mathematical statements, if you give the system different facts in an adversarial order, you can cause it to alternate between being highly convinced that something is true, and that it is false. Logical induction solves this problem.

I don’t think solomonoff induction can be straightforwardly applied to this task. Solomonoff induction predicts what inputs it will receive, but this isn’t, I think, immediately well suited for assigning credence to mathematical statements.

I’m not sure how a halting oracle could be empirically detected? Well, I guess if one believed that hypotheses like “this is like a halting oracle except only for the first 3^^3 Turing machines” were substantially less likely than “this is a halting oracle in general”... which I guess seems reasonable.

Ok, I guess I can see that, sorta.

Not sure what practical tests you have in mind though.

yters 1604 days ago [-]
Everything is a bitstring, and Solomonoff is optimal for predicting bitstrings. Probabilistic induction, if it is computable, cannot do better than that.

Halting oracles require fewer program bits to generate longer bitstrings, so they make compressible bitstrings more likely than Turing machines. Applying bayesian reasoning, the existence of compressible bitstrings implies the existence of halting oracles. Or, even simpler, given the fact Turing machines might not halt, but halting oracles can always halt, the mere existence of bitstrings implies the existence of halting oracles.

drdeca 1604 days ago [-]
Solomonoff induction is optimal for, given an initial portion of a computable bitstring, predicting the rest of (or the next portion of) the bitstring.

Logical induction (my apologies for referring to “probabilistic induction”; I was speaking unclearly when I did so. What I meant to communicate by that phrase was “if you attempt to do Bayesian updating on math statements in a naive ‘just apply Bayes’ rule’ way”, as an example of something that doesn’t work) on the other hand, will assign probabilities to different statements, which it updates over time, and these different probabilities aren’t for mutually exclusive events, unlike with solomonoff induction.

Now, I don’t mean that if you had an oracle for solomonoff induction, that you couldn’t make a program using that oracle to produce something much more powerful than logical induction.

And I’m also certainly not saying that the algorithm found for logical induction is the fastest one for the problem it solves. If it was (at least, if it was for small amounts of time? Perhaps it could be asymptotically optimal for large amounts of time), I think it would be strong evidence that the human mind is special with respect to estimating the plausibility of mathematical statements! (The algorithm is very slow).

But, I do think that it is quite probably better at that task than straightforward applications of computable approximations of solomonoff induction. (Like computable approximations to solomonoff induction, logical induction also goes through an enumeration of all possible programs and eventually tries each one of them.)

By the way, this thread is getting to the point where I have to click your specific reply in order to get to the reply link; do you want to move this to email or discord or something? Totally fine if you want to keep it here of course, just wanted to offer the option.

Also, I need to read up on the thing you mentioned on randomness conservation. Sounds interesting. Sorry for not having gotten to that yet.

I’m confused by what you mean by “implies the existence of halting oracles”. Do you just mean as abstract objects? (in which case, yes, I completely agree that halting oracles exist as abstract objects. There is a fact of the matter as to whether any particular Turing machine halts, and so there is a function from Turing machines to whether the input Turing machine halts on an empty tape. Similarly for halting oracles for Turing machines which themselves have a lower halting oracle.) Or do you mean that they exist or might exist as physically implemented (err, not sure “physically” is exactly the word I mean. Like, if human souls are the only things in/“in” the universe that can solve the halting problem, I don’t think I’d call that “physically implemented”, but I don’t mean to exclude this option) things?

I’m currently skeptical of the latter, but that is approximately the question we are discussing, so maybe I’m wrong.

Are you saying that, you think it plausible that humans can (in a sense) compress strings in ways that Turing machines cannot, (or that a Turing machine with access to a human / to humans, can compress strings better than a Turing machine can alone), and that, if true, that would be evidence for humans containing/having-access-to halting oracles?

If that is what you mean (and my apologies if I’ve misunderstood you), I do agree that this would be evidence for that, yes (though I’m, of course, less convinced that access to humans as an oracle would allow a TM to compress strings better.).

[fifth letter of toboggan]mail btw: Madaco dot madaco

Edit: have started to read the Leonid Levin paper “randomness conservation inequalities” now. Quite interesting so far.

yters 1604 days ago [-]
Yes I'll continue over email. Good discussion! You are surprisingly open minded on the issue.
spectrum8 1610 days ago [-]
Only if the contexts are defined and logically comparable. Otherwise it will just take risks or guess at the question like any other human, until the risks taken push the problem toward sharper definitions. The right question would be: can computers define concepts (names for numbers) by themselves? And would you trust it while it makes you happy, or will you investigate the magic yourself?