We don't know why acetaminophen relieves pain, we don't know why lithium stabilises mood in people with bipolar disorder, and we don't know how antidepressants actually relieve depression. Would we like to know how these drugs work? Certainly - it might help us develop better drugs. Do we need to know how these drugs work? Absolutely not - given a choice between a mystery drug and having surgery without anaesthesia, I'm choosing the mystery drug every time.
Human beings are black-box algorithms; we can concoct plausible explanations for our behaviour, but there is abundant evidence that we don't actually know our own minds. I think our discomfort with black-box algorithms is essentially a reflection of our discomfort at the unknowability of the human mind. These algorithms work, but we don't really know what's happening on the inside, just like literally everyone you've ever met.
I tend to agree, and from a species perspective there are boundaries to what other animals can do too. An ant, for instance, can only do so much: it can do basic processing, but as far as we know it lacks the complex physiological structures and reasoning abilities to concoct plausible explanations for its behavior. Humans can come up with explanations, but it's still hard for us to dig deeper and ask those why questions.
On a scale of species reasoning ability complexity, I’m curious if there are other animals that have the capacity to engage with nuanced why questions. Any zoologists out there?
It would be similar to how we have been adjusting our definition of what it means to feel pain (particularly in the context of claims that entire animal species don't feel it, so there are no qualms about preparing them as food in ways that would be considered inhumanely cruel with others - see lobsters and crabs).
Chimps recognize themselves in the mirror. So they have some theory of mind.
Dogs don't, but unlike chimps, can follow pointing directions. So they have some theory of OUR mind.
My dog is stubborn and doesn't follow some directions right away. I will sometimes see her pause, consider, and ignore the command. If I then remind her in a generic way that I am serious (like a stern generic "hey"), she will then come back and do the command. So it's more complex than a direct cause and effect.
That also sounds like a "why" question that has gone through her head...
Which is to say we don't experience each other as black boxes (regardless of whether we really are). We experience each other as partial unknowns - people often do unexpected things, but we construct plausible after-the-fact explanations, and those who consistently do things without any explanation make us nervous.
Phrasing the need for understanding is a bit tricky. Certainly, a person can live with, must live with, some knowledge-without-understanding in their life. But a proliferation of things beyond our understanding in many ways undermines our position, our feeling of power in the world.
It seems like the broad progress of humanity, since the age of ... reason, has come through an increase in both raw data and broad understanding. And while understanding is a tricky thing to describe, the rise of AI is actually helpful in describing what it isn't. Understanding is a kind of knowledge that humans can take into many domains and apply in many ways - mathematics is the prototypical example: the laws of physics, expressed in mathematics, can be applied in a multitude of ways. In contrast, trends extracted from raw data may give us successful predictions without telling us the reason.
Suppose we have a psychiatric drug that is a pure "black box": we know that consuming it changes a person's behavior in a somewhat predictable fashion. But that doesn't tell us whether the drug returns someone to "normal" or simply suppresses their other abilities - if we're only treating symptoms, we learn nothing about mental illness (given that our real minds remain black boxes too).
Moreover, it's not true that we deal with other people as black boxes. Rather, "folk psychology", our implicit understanding of each other's motivations, is a key part of our interpersonal relations. This process may not be correct in scientific terms, but that doesn't mean it can be dismissed as a key component of our experience.
And overall, the proliferation of knowledge-without-understanding leaves us without broad control of the world - just a tool chest of isolated techniques.
Edit: Ironically, "tool box reasoning" - the ability to have a group of tools available and use them as appropriate and in combination - is a pretty uniquely human ability, one that gives a person a wide range of options in the world. It is itself very much a black box on a meta-level, but we know quite well that it exists (so I suppose that's a useful piece of general black-box knowledge).
I've arrived at those terms after spending a lot of time thinking and studying just what "technology" and "science" are. One of the most influential definitions I've come across is John Stuart Mill's: technology is the study of means, while science is the study of causes.
That's ... not fully bulletproof, but it's a very good start. Technology (from the Greek techne, an art, skill, or craft) is a way of achieving some end. Science is knowledge, especially of fundamental causes, principles, mechanisms, or dynamics.
Technology tells you what to do (and/or use, or apply, or manipulate). You don't have to know why it works, as in the example of your anesthesia, only that it works. It's knowledge, but a thinner level of knowledge than science provides.
Science tells you why a thing works. You may still not be able to influence it (we understand, generally, plate tectonics and stellar fusion; we control neither), but the understanding gives predictability around such phenomena. Plate tectonics tells us where earth movements may occur, with what frequency and severity, and the like. Understanding stellar fusion makes sense of the brightness, colours, frequency distribution, mass, and other properties of stars, as well as of the occurrence of events in their lifecycles.
Jonathan Zittrain recently wrote of a characteristic of AI models that has been bothering him: that they provide solutions without explanation. The thing about an AI classifier is that nobody can tell how the classifier itself works. We can throw test data at it and evaluate the outputs, but there's no apparent causal link between input and output. AI is non-explanatory knowledge.
Whether that means it's technology rather than science, or some new domain that's neither technology nor science (as much as the semantic distinction has meaning), I'm not sure. Though the question's one I'd started asking a couple of years ago myself.
TFA describes this phenomenon in two aspects: first, that the causal function of proteins themselves isn't understood, and may be beyond our modelling capabilities; and second, that AI-driven prediction of folding topologies provides accurate predictions but no causal mechanism.
Another space this resembles is inferential statistics, of which AI is in many regards an outgrowth. Correlation is useful information, but it is not causation. Multi-stage gradient-descent AI is to a large extent just more complex statistics ... but is that all it is, or is it something more, yet still short of a causal or explanatory mechanism? An emergent property of complex stats themselves?
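A toy sketch of that point (the data and variable names here are made up): gradient descent on a squared-error loss is just least-squares regression computed iteratively. It predicts well without saying anything about why y tracks x.

```python
# Gradient descent recovering a linear trend from noisy data.
# This is iterative least-squares - statistics, not explanation:
# the fitted slope predicts well but says nothing about *why* y tracks x.

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.1, 4.9, 7.2, 8.8]  # roughly y = 2x + 1, with noise

w, b = 0.0, 0.0   # slope and intercept, both start at zero
lr = 0.01         # learning rate
for _ in range(5000):
    # gradients of mean squared error with respect to w and b
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)  # slope near 2, intercept near 1
```

The "model" is just two numbers fitted to a trend; nothing in it encodes a mechanism.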
The combinatorics provided a 'how': Given a problem in algebra, we could make up some combinatorics that describe the system, and then prove some theorems that tell you precisely how to manipulate the system to get the answer you want. But this 'how' doesn't necessarily say much about 'why' such a solution must exist.
On the other end, we had representation theory, which gave a sort of algebraic reason for the solution to exist, but would give you no help in how to actually construct the solution.
It was extremely common for interesting problems to have difficulty on one side or the other. You've got a 'how' and spend dozens of grad-student-years (GSYs) trying to discover the 'why', or the reverse.
Likewise, ML is giving us relatively cheap answers to the 'how' question. How do proteins fold? Well, now we have better answers. And those answers, carefully studied, should help with the 'why'. Now, instead of having just a start structure and an end structure, you /should/ be getting an explicit sequence of moves from the neural network. Studying /why/ some moves are better than others should yield progress on the overall why question.
Along the way, getting answers to 'why' should help constrain the search space for AlphaFold, and allow it to come up with better answers faster.
The basic question going forward is 'how do we distill crystallized knowledge from this black-box algorithm?' It's going to be an important question in a range of sciences - basically anywhere that data >> knowledge. Finding answers won't be easy, but, hey, no one said good science has to be easy...
I apply this to a lot of fields, and it brings a better level of understanding.
For example, knowing that storing text in MySQL as utf8mb4 is better than storing it as utf8 will get you a job. But knowing why storing text as utf8mb4 is better than utf8 will build you a career.
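For anyone curious about the "why" here: MySQL's legacy `utf8` charset (really `utf8mb3`) stores at most 3 bytes per character, while real UTF-8 needs up to 4 for anything outside the Basic Multilingual Plane. A quick check in Python, rather than MySQL itself:

```python
# MySQL's legacy "utf8" (alias utf8mb3) stores at most 3 bytes per
# character; real UTF-8 needs up to 4. Characters outside the Basic
# Multilingual Plane - emoji, many rare CJK characters - take 4 bytes.

for ch in ["e", "é", "€", "😀"]:
    print(ch, len(ch.encode("utf-8")), "bytes")

# "😀" encodes to 4 bytes, so it cannot be stored in a utf8mb3 column;
# utf8mb4 handles the full Unicode range.
```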
Sure, I can. I just did. Look at the comment you replied to. It does exactly that.
Physics-wise, there will be some degree of resonance between any two circuits that aren’t an exact mirror of each other.
And mirror circuits can be compared most easily of all.
In the case of folding, when we ask "why" we might be asking why it folds one way and not the other. Causation has consequences, like for example we might want to know whether a small change to one part of the protein will significantly change how it folds. We might say some amino acids cause the fold and others don't.
If you understand the mechanism behind something, you can control it better.
Or, if you're interested in machine learning, you could ask what changes to the neural network make prediction better or worse and which parts of it matter the most, when it might make mistakes, and so on.
Just as we try to understand human behaviour not by studying individual neurons but by reasoning at a more abstract level - psychology.
To take your example, building an SDC will teach us which aspects are the most complex, and which solutions work better than others for those aspects. Comparing those solutions will let us see what working solutions have in common, and give us theories that we can then test.
Unless there has been some breakthrough that I'm not aware of this isn't true.
It's not level 5 open world "works in all conditions" self-driving - but neither is it level 0 no automation whatsoever.
Nor can humans drive unconditionally either.
* What is understanding?
* How do I know that I understand?
Sooner or later you realise that all theories of knowledge have one fundamental flaw - the problem of the criterion. And if you so choose, you can dismantle any argument with the Münchhausen trilemma. Socrates' favourite trick.
I like Feynman's answer best - "What I cannot create, I do not understand" - because in a round-about way it lands us squarely at the Turing test.
Could we ever know what Consciousness is unless we create it?
Though ability to "act upon it" might be slightly better
Knowledge is knowing there are icebergs in the water, comprehension is knowing you should slow down
Me: Why is "there is an iceberg in the water" knowledge?
You: "because it corresponds to reality".
Me: That makes it a fact, not knowledge.
That it is yet unsettled as of 2019 is also a fact.
Even worse than the Gettier problem, the proposition "Tomorrow I may or may not die." satisfies JTB, rendering it useless.
Let's just say that I don't know what knowledge is, but if it's not useful - it's not knowledge.
All philosophical debates are a form of Kobayashi Maru: there is no "right" answer by design. The purpose is to make you understand the problem conceptually.
Auden used it:
And when he occupies a college, Truth is replaced by Useful Knowledge;
Sometimes drawing the distinction is necessary; sometimes it's not.
Conceptually (for one's own intellectual benefit), recognising the distinction between know-what, know-why and know-how is important. https://en.wikipedia.org/wiki/Know-how
This is the millennia-old distinction between knowledge and understanding (or "wisdom"). I'm pretty sure you could already find it in Homer, the Bible, the Mahabharata, and the Epic of Gilgamesh...
"He who thinks, then, that he has left behind him any art in writing, and he who receives it in the belief that anything in writing will be clear and certain, would be an utterly simple person, and in truth ignorant of the prophecy of Ammon, if he thinks written words are of any use except to remind him who knows the matter about which they are written.
Writing, Phaedrus, has this strange quality, and is very like painting; for the creatures of painting stand like living beings, but if one asks them a question, they preserve a solemn silence. And so it is with written words; you might think they spoke as if they had intelligence, but if you question them, wishing to know about their sayings, they always say only one and the same thing. And every word, when once it is written, is bandied about, alike among those who understand and those who have no interest in it, and it knows not to whom to speak or not to speak; when ill-treated or unjustly reviled it always needs its father to help it; for it has no power to protect or help itself."
The irony of my being able to present this only due to the nature of writing cannot be overstated. At the same time it's the exact scenario described here. The nature of understanding vs the nature of knowing, simply replacing 'computer' with 'one who genuinely knows.' The writings of Socrates/Plato really are quite remarkable.
It's like the tech industry likes sci-fi stories, but doesn't learn from any of their morals.
We have understanding of gravity, as Newton's laws explain the behaviour of everything that falls toward anything else. That's compression: you don't need a long list of measurements of falling things, you just need a much shorter list of starting conditions. Newton's laws will then decompress the long list of measurements from it.
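That compression can be made concrete: instead of storing every measured position of a falling object, store two initial conditions and "decompress" the rest with the law of motion (a sketch with made-up numbers):

```python
# "Compression" via Newton: a long table of measurements is replaced by
# two initial conditions plus a law that regenerates ("decompresses") it.

G = 9.81             # m/s^2, gravitational acceleration near Earth's surface
s0, v0 = 100.0, 0.0  # initial height (m) and velocity (m/s) - the "short list"

def height(t):
    """Decompress: position at time t from s = s0 + v0*t - G*t^2/2."""
    return s0 + v0 * t - 0.5 * G * t * t

# Regenerate the "long list" of measurements from the short one:
measurements = [height(t / 10) for t in range(11)]  # t = 0.0 .. 1.0 s
print(measurements[0], measurements[10])  # 100.0 at t=0, about 95.1 m at t=1
```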
So what does it mean for protein folding? Do we need more compression, so the knowledge is understandable on a human scale? Or does AI need less compression, because it reasons in a way better suited to this problem? Isn't that just a more abstract way of saying AI is smarter than we are?
Usually one understands LA when one has memorized many different theorems of LA, has perhaps memorized how to derive some from others, or even the same theorem in several different ways. That doesn't even begin to cover establishing links between LA and other branches of mathematics, all of which a proficient mathematician memorizes individually, along with the connections between them. The mathematician's mental knowledge graph is compressed, but not that much.
I would consider the axioms as necessary but incomplete knowledge. You need more knowledge to apply these axioms.
A bit like the axioms are the parts of a car. You can use the car schematic and the parts to create a car. Or you can generate the missing knowledge of the schematic by spending lots of time combining random parts.
I am a bit out of my depth here, but maths can be reformulated - I presume like Minkowski did with Einstein's relativity, or Riemann with integration. The result of the reorganized knowledge is deeper understanding.
There is both compression and correction going on, and even correction can be seen as compression by removal of special cases.
If you have no understanding at all, the only possible implementation is an infinite lookup table. When you start understanding commands better, you can implement parts better.
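One way to picture that: a command interpreter with zero understanding is a literal lookup table, one entry per input, and "understanding a part" means replacing an infinite slice of the table with a rule. A toy sketch (names and commands invented):

```python
# Zero understanding: one table entry per possible command.
lookup = {
    "add 1 2": 3,
    "add 2 2": 4,
    # ...an entry for every command we'll ever see (in the limit, infinite)
}

# Partial understanding: a rule replaces an infinite slice of the table.
def interpret(command):
    verb, *args = command.split()
    if verb == "add":                  # we "understand" addition...
        return sum(int(a) for a in args)
    return lookup.get(command)         # ...and fall back to the table

print(interpret("add 40 2"))  # 42 - an answer never stored anywhere
```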
Furnishing your brain with a library of facts on a subject means it now has more items to make connections between, which is what allows understanding to develop.
These days, when I’m learning something new, I’ll try to accumulate as many basic facts as possible. Subjects that seem impenetrable can often be conquered in this way. The more connections you can make, the faster you can develop an understanding, and the more creative you can eventually become.
I understand the US president must stand down after two terms. I know this happens because of my knowledge of the US constitution.
I also understand the US constitution is a set of rules. And I know the vague historical reasons why they were formed.
Understanding and knowledge are, to me, in a constant cat-mouse chase: "okay, this happened - but why? Ah, it's because this happened - but why did that happen? Ad infinitum".
It's cause and effect until you lose interest and are happy saying 'just because it's like that'.
Edit: Can the editor tweak the title? It's confusing not just to me - others here have mentioned the same issue.
The other ill of much high-school and some engineering math teaching is inadequate focus on how to apply the math in modeling real-world problems. That, in turn, makes the later stage of learning the deeper "why" questions harder.
Besides, the "getting used to" part is apparently inescapable, at least at some point. After all, as great a mathematician as John von Neumann said: "Young man, in mathematics you don't understand things. You just get used to them."
I bet I could find a fields medalist who barely knows what a category is.
Idk if it's because I've been reading the word "right-wing" a lot, but when I saw "right-knowledge" I thought it was a specific term, when it seems they're using the dash as an em dash even though the character's Unicode codepoint is an en dash.
I get that it's harder to type a dash, but c'mon - for an official online publication, their publishing software should have some easy shortcut for it.
Not as distinguishable as an em dash but much better than an en dash with no spaces.