For one thing, you really only gain by trying to be constructive. If you cannot find a proof of P without using LEM (the law of excluded middle), you still have a result: in a logical system without LEM you can prove LEM -> P. You haven't really lost anything! Furthermore, I have often been surprised by just how far you can go without LEM; very often, in fact.
Finding constructive proofs can feel very much akin to removing an unnecessary dependency from a codebase. And even if you ultimately do need LEM somewhere, it is often possible to factor it into the "right" place in the proof rather than just sticking `LEM ->` at the front of the whole thing.
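As a concrete sketch of that factoring (Lean 4, with `p` standing in for an arbitrary proposition), you can take LEM as an explicit hypothesis for just the proposition that needs it, instead of assuming it globally:

```lean
-- Double-negation elimination needs LEM, so we take it as a
-- hypothesis for this one proposition rather than opening the
-- Classical namespace: the statement proved is "LEM -> P".
example (p : Prop) (lem : p ∨ ¬p) : ¬¬p → p := by
  intro hnn
  cases lem with
  | inl hp => exact hp
  | inr hn => exact absurd hn hnn
```

You should be able to paste this into the Lean web editor linked elsewhere in the thread.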
Also, if you care about first-order logic, then in fact you should love intuitionistic logic: it is a finer logic that actually contains classical first-order logic, via the double-negation translation: https://en.wikipedia.org/wiki/Double-negation_translation
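A small illustration of the flavor of the translation (a Lean 4 sketch): excluded middle itself is not provable constructively, but its double negation is, using no classical axioms at all:

```lean
-- ¬¬(p ∨ ¬p) holds intuitionistically: assuming ¬(p ∨ ¬p),
-- any proof of p would give p ∨ ¬p (contradiction), so we can
-- build ¬p, hence p ∨ ¬p again, contradicting the assumption.
example (p : Prop) : ¬¬(p ∨ ¬p) := by
  intro h
  exact h (Or.inr (fun hp => h (Or.inl hp)))
```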
Lastly, though I have yet to understand this better, constructive proofs seem to enjoy better computational properties under Curry-Howard.
This also suggests why classical logic can still be useful: if I'm going to use continuations all the time, it's nicer to program in a language that supports them than to explicitly program in CPS. Pick the right language/logic for the task at hand!
Practically, intuitionistic logic has a lot of applications to computer science thanks to Curry-Howard.
Philosophically, soundness of LEM depends on where one locates the source of mathematics. If you're like Kant (or Brouwer), then LEM would make little sense, since the source of mathematics, just like any other cognitive ability, is human cognition, which exists independently of abstractions of the outer world. But even without being a Platonist (like Gödel), if mathematics has any sort of external source (e.g. abstracting the world, or being a good abstraction of the world), then it's really hard to argue against LEM, although some epistemological arguments exist.
> soundness of LEM depends on where one locates the source of mathematics
One of Goedel's many great contributions was the double-negation translation (later refined by others), and what it does is show that (simplifying a bit) intuitionistic logic and classical logic are equiconsistent! In other words: any contradiction you can derive in classical logic, you can convert into a contradiction in intuitionistic logic. (The reverse direction is trivial because all intuitionistic proof principles are also classical.)
Just wanted to chime in regarding the last sentence: There are right now ongoing efforts to eliminate any uses of LEM in algebraic geometry.
We do this not only out of academic curiosity, in order to create a more elegant setup and to be able to extract algorithms from proofs, but also to carry out a program called "relativization by internalization". The idea is that we can automatically derive some results for "relative schemes" if we have proven them for ordinary schemes without using LEM or the axiom of choice.
Linear logic, for example, is finer-grained than both classical and constructive logic. Both can be embedded in it (just as classical embeds into constructive), and it makes distinctions they lack. But linear logic, too, comes in classical and intuitionistic versions, and again classical linear logic embeds into intuitionistic.
And it doesn't stop there. Linear logic can be embedded in ordered logic, and ordered logic into the logic of bunched implications. For all I know there are even finer-grained logics out there.
But doing all of mathematics in the logic of bunched implications would be an impractical nightmare, I think. I tend to prefer choosing whether to work classically or constructively (or linearly, or orderedly) according to whether it illuminates a relevant part of the problem. Sometimes you care about constructivity; sometimes you don't. If you don't, then don't waste your time trying to do the proof constructively.
The trouble, of course, is that you don't always know whether working constructively will provide illumination until you try it! :)
Wait, really? As I understand it there are things provable in LL that can't be proved in ILL. Can't find the citation though.
It should be noted that double negation translations are more complicated at the proof level even though they capture provability.
This sounds more like a motivation than like a trouble.
It seems rather obvious that the logic you are using has to match the behavior of the objects you are applying it to, or you will of course end up with wrong conclusions. If we change the example a bit, say the particle is in region R with probability zero or the particle is in region R with probability greater than zero, then we again have a case where the set of possibilities is nicely partitioned into two mutually exclusive and exhaustive sets: probability zero and probability not zero. But I guess I am missing something, or there wouldn't be a lot of smart people fighting over the issue.
you should be able to copy/paste it into https://leanprover.github.io/live/latest/
But besides that, I did not really want to make any claims, and certainly nothing that could be easily settled by a proof, I think. My point was more that one has to take care that the type of logic one uses fits the domain one wants to reason about. If, in that domain, the law of excluded middle does not hold, or only holds in a non-straightforward way, then using a logic with the law of excluded middle will at least be not as easy as it could be, possibly even painful, and in the worst case lead to wrong conclusions.
If you understand mathematics and logic as merely a formalized language of symbols, then you can judge those systems on certain qualities (complexity, clarity, expressiveness, etc.) but not "truth." The creation of a logical expression analogous to a natural phenomenon should instead use the methods of science: hypothesis and testing.
"The Unreasonable Effectiveness of Mathematics in the Natural Sciences" by Eugene Wigner https://www.dartmouth.edu/~matc/MathDrama/reading/Wigner.htm...
> The first point is that mathematical concepts turn up in entirely unexpected connections. Moreover, they often permit an unexpectedly close and accurate description of the phenomena in these connections. Secondly, just because of this circumstance, and because we do not understand the reasons of their usefulness, we cannot know whether a theory formulated in terms of mathematical concepts is uniquely appropriate. We are in a position similar to that of a man who was provided with a bunch of keys and who, having to open several doors in succession, always hit on the right key on the first or second trial. He became skeptical concerning the uniqueness of the coordination between keys and doors.
> Most of what will be said on these questions will not be new; it has probably occurred to most scientists in one form or another. My principal aim is to illuminate it from several sides. The first point is that the enormous usefulness of mathematics in the natural sciences is something bordering on the mysterious and that there is no rational explanation for it. Second, it is just this uncanny usefulness of mathematical concepts that raises the question of the uniqueness of our physical theories.
(The article happens to mention the Brouwer-Hilbert controversy but only in a footnote).
Most importantly, the leap that if a medium can be used to create accurate representations, it must contain inherent, unquestionable truth about the represented phenomena, is just hogwash. Michelangelo could sculpt masterful representations of the human form, but the beauty and accuracy of the representation are the products of a skilled master, not of any inherent humanness of marble.
It's not just that the medium _can_ be used to create accurate representations by a skilled master. It's that mathematical concepts created with no regard for the natural world, purely on the basis of their formal elegance, turn out, years later, to be an elegant way to describe and predict natural phenomena.
I don't think it "proves" anything, it's just... mysterious. The mystery doesn't go away if we insist that math is just an arbitrary human invented formal system. There's no a priori reason it should work to describe and predict the natural world at all, is there?
We could invent other arbitrary formal systems based on completely different axioms and rules that _weren't_ useful for describing and predicting the natural world at all, couldn't we? I think?
I don't know where this characterization of "proof" comes from. It is merely an essay, and not intended as anything other than one. When I read it years ago, I did not get the sense that Wigner was asserting inherent mathematical truth.
>It focuses on cherry-picked phenomena that are easily represented by simple math, but ignores those where math struggles or appears to be inherently incapable.
Does he assert anywhere that any phenomenon will yield easily to mathematics? I don't think so. I think his point is that there's no good reason that even the phenomena that are well explained by mathematics should yield so easily to it. It's not "All of nature is surprisingly mathematical." It's "I see no good reason that so many phenomena are mathematical."
I'm too lazy to reread the essay, so I may be remembering it wrong.
Any particular "arbitrary formalized language of symbols" is validated by experiment, either explicitly or implicitly. Humanity/culture keeps the string games that work and jettisons the ones that don't; it is a slow, collaborative process. We have worked out a pretty good "arbitrary formalized language of symbols", not because we derived these string games or intuited them, but because we test them daily.
To be a formalist doesn't imply that a person under-appreciates the mysterious power of mathematics, just that they don't believe in an underlying pseudo-Kantian "truth".
Also, as I understand it, advanced physics is necessarily done formally only, since our intuition is so ridiculous at the atomic scale and all the equations, like Schrödinger's, are only postulated, then verified empirically.
Also, it is snarky and under-informed, but I think of this debate as Hilbert (Hilbert!) and Some Guy.
> The creation of a logical expression analogous to a natural phenomenon should instead use the methods of science hypotheses and testing.
This is usually called "physics".
Interestingly, your latter point is exactly formalism (the "formal" part is referring to the formalized language), i.e. Hilbert's position. Formalism basically asserts that mathematics is a fancy string manipulation game.
Of course, the proofs are first constructed in the context of meaning, but the entire process seems to be a struggle to context-switch between the realms of meaning and symbology, which do not necessarily mirror one another.
But Godel has somehow proved that symbols cannot always provide both a consistent and complete system of meaning.
Gödel seems very much related to this, but I'm still lost.
On the other hand, unprovable statements never make it to the "but can be explained" stage: either the formal system is consistent, and these expressions by definition cannot have an answer, or they can be answered but the formal system is demonstrably inconsistent.
The formal system also needs to be a little more precise than "significantly expressive": it needs to allow for self-referential expressions. In effect, it must be possible to create an expression that can be mapped to the natural language sentence "This statement has no proof in this formal system".
In natural-language puzzles that might be a fun paradox to try to figure out, where the challenge is to determine whether, given the rules of the context, the answer is actually "true" or "false". But when it comes to Gödel sentences for formal systems, there is never a paradox: either they are undecidable, or the formal system is inconsistent. And we don't get an "out" by saying "but in reality, the answer is ...", because the whole point of formal systems is to be self-contained. There is nothing "outside" of them that we can use to make claims about expressions within the formal system; we only get the axioms of the formal system itself.
The above is a complete misrepresentation of Gödel’s second incompleteness theorem.
The theorem holds for any sufficiently expressive system, where the term ‘sufficiently expressive’ has a precise formal meaning. Moreover, it is not the case that any sufficiently expressive system must be inconsistent! The theorem could perhaps be paraphrased as ‘no sufficiently expressive consistent system can prove its own consistency’, as long as we are prepared to explain what ‘its own’ really means.
Importantly, there are systems that do not allow paradoxical statements, and are nevertheless quite expressive, in an informal sense. For example, any dependently typed total functional programming language, such as Agda, Coq, Idris, or Lean.
All of these languages are designed to be consistent. Of course, implementation bugs do happen.
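As a small illustration of what totality checking looks like in practice (a Lean 4 sketch; `sumTo` is a made-up name for this example), recursion is only accepted when the checker can see that it terminates:

```lean
-- Lean accepts this definition because the recursive call is on
-- a structurally smaller argument (n), so the function is total.
-- An unrestricted recursion like `def loop (n : Nat) := loop n`
-- would be rejected.
def sumTo : Nat → Nat
  | 0     => 0
  | n + 1 => (n + 1) + sumTo n

#eval sumTo 4  -- 10
```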
If you would like to learn more about dependent types, ‘The Little Typer’ is newly out and a lot of fun to read. For an accessible introduction to the concept of total functional programming, see Turner 2004.
 Hilbert-Bernays provability conditions, https://en.wikipedia.org/wiki/Hilbert–Bernays_provability_co...
 Gödel’s second incompleteness theorem, https://en.wikipedia.org/wiki/Gödel%27s_incompleteness_theor...
 Dependent type, https://en.wikipedia.org/wiki/Dependent_type
 Total functional programming, https://en.wikipedia.org/wiki/Total_functional_programming
 Agda programming language, http://agda.readthedocs.io
 Coq proof assistant, http://coq.inria.fr
 Idris programming language, http://www.idris-lang.org
 Lean theorem prover, http://leanprover.github.io
 The rise and fall of @proofmarket, https://twitter.com/i/moments/921153475836305408
 Friedman, D.P., Christiansen, D.T. (2018) ‘The Little Typer’, https://mitpress.mit.edu/books/little-typer
 Turner, D. (2004) ‘Total Functional Programming’, http://www.jucs.org/jucs_10_7/total_functional_programming
To be even more precise, "its own consistency" here is the arithmetized statement that there is no number n which encodes a proof of a contradiction from the axioms of T; that is the sentence a consistent T cannot prove about itself.
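In the usual arithmetized notation (a standard formulation, not specific to any particular comment above), the consistency statement is:

```latex
\mathrm{Con}(T) \;:\equiv\; \neg\, \exists n\; \mathrm{Proof}_T\!\left(n,\, \ulcorner 0 = 1 \urcorner\right)
```

and the second incompleteness theorem says that if T is consistent and sufficiently expressive, then T does not prove Con(T).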
So it's about the axioms, which every formal system assumes and holds to be true. Basically, it's impossible to prove within a system that that same system's axioms are consistent.
In short, no formal system of this kind can prove its own consistency; indeed, if such a system could prove its own consistency, it would in fact be inconsistent.
The state space is far too large to show a program is correct by constructing all possible processes it could execute. Instead you must show that the program admits no incorrect processes whatsoever, and as far as I know mathematical formalism is the only possible way to do that.
Of course if one just wants to make money, formal correctness is observably of absolutely no importance. And there's nothing wrong with wanting to make money!
It always seems to me that using LEM over infinite sets is akin to claiming that all problems are decidable.
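That intuition shows up directly in Lean 4 (a minimal sketch; `decideAnything` is a made-up name): with the classical axioms, every proposition becomes "decidable", but only noncomputably, so no actual decision procedure falls out of it:

```lean
-- With LEM (via Classical.choice) every proposition gets a
-- Decidable instance, but the definition must be marked
-- noncomputable: it decides everything "on paper" without
-- producing an algorithm.
noncomputable def decideAnything (p : Prop) : Decidable p :=
  Classical.propDecidable p
```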