Also, why is it considered blogspam for having excerpts from an original article? I also see this occur frequently on HN. The article not only has excerpts, but comments on the excerpts and links to related material. This appears useful to an interested reader.
What an incredibly inadequate argument. Obviously, none could have done so in the past, because the required resources simply weren't available. Breakthroughs are being made right here in this era, with Jeff Hawkins's work being one of the latest.
This guy reminds me of Richard Smalley, who, against better judgment, attempted to make irrational arguments against molecular nanotechnology, as envisioned by Drexler.
Also, he can't be a software pioneer. He's too damn young.
Mathematics has the rule of thumb that anything named after someone was not discovered by them. It should apply equally well to all fields of human endeavour.
Until we get rid of that myth we will never figure out how creativity actually works.
I mean, that's probably true.
That's why we have big corporations with multi-billion dollar R&D budgets for large projects. A single human brain probably couldn't code all of Gmail, let alone an AI.
https://en.wikipedia.org/wiki/List_of_pioneers_in_computer_s... (spans 500 BC to 2011 AD)
https://en.wikipedia.org/wiki/Computer_Pioneer_Award (awarded each year)
2) That current AI is just very primitive NNs, and all the hoopla is mostly hype (like Big Data, Grid Computing, Biotechnology, Nanotechnology, Virtual Reality, 3D Printing, and several other previous fads sold as revolutionary and the cure for everything)
3) That they are several orders of magnitude more primitive than a human brain.
4) That we haven't had Moore-law style increases in CPU power for quite some time.
5) That, never mind AI, we can't even make a good email app...
6) That all kinds of peddlers make good money by touting AI (e.g. IBM selling BS tech as "Watson")
Note, though, that in six points you couldn't come up with a single argument against the future of AI development. They were all independently irrelevant, like denying future medical breakthroughs because doctors didn't wash their hands 100 years ago. Your points are all distractions, like suggesting Elon Musk's tweeting habits today doom all future humans from ever reaching Mars.
As a scientist, I don't believe anything is "inevitable" (that's for religious people to believe).
I can very much imagine scenarios where we're already on the steep end of a technological curve, having exhausted most of the low-hanging fruit, and failing to make many more revolutionary breakthroughs (no matter the timeframe).
>Note, though, that in six points you couldn't come up with a single argument against the future of AI development. They were all independently irrelevant, like denying future medical breakthroughs because doctors didn't wash their hands 100 years ago.
I didn't want to make an argument about the possibility or impossibility of AI in the abstract (whether it's physically possible or not; since our brain does it, it is physically possible, and that's boring, again left to religious people to discuss, of whom there's no shortage, even among the atheists). I wanted to make one about its _actual_ possibility, given what we know of our technological development, past claims, trends, etc.
If someone says general AI is possible because consciousness is not a magical god-given treasure, I'm allowed to ask what it is instead. That would give me a model to at least see how someone would approach it, something to falsify against.
General AI is to current AI what flying saucers are to airplanes: if you look at the latter, it's easy to envision the former, and given the technological advances it seems inevitable. On closer inspection, though, a whole new set of concepts, some very improbable, is needed.
Flying saucers would need some gravity cancellation property. Not impossible by itself, but far from a natural evolution of aircraft.
By analogy, general AI would need a workable model of consciousness. Again, not impossible by itself, but also not a natural consequence of current advances in the field of AI.
I envision bundles of neurons that begin their existence mostly disconnected... These bundles are described somewhere in DNA (but not their ultimate learned configuration), as is whatever learning algorithm drives this process.
We will have AGI when we understand the underlying learning algorithm our brain uses... Personally, I believe that algorithm involves fantasizing future scenarios and learning from the fantasies.
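To make "learning from fantasies" concrete, here's a toy Dyna-Q-style sketch in Python (my own illustration, not the parent's algorithm): the agent learns from real experience, then "fantasizes" by replaying transitions through a model it has built of the world. The environment, the states, and the number of fantasy replays are all made up for the example.

    import random
    from collections import defaultdict

    Q = defaultdict(float)       # state-action values
    model = {}                   # learned model: (s, a) -> (r, s')
    alpha, gamma = 0.1, 0.9
    actions = [0, 1]

    def q_update(s, a, r, s2):
        best_next = max(Q[(s2, b)] for b in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

    def step_real(s, a):
        # stand-in environment: reaching state 3 pays off
        s2 = min(s + a, 3)
        return (1.0 if s2 == 3 else 0.0), s2

    s = 0
    for t in range(1000):
        a = random.choice(actions)
        r, s2 = step_real(s, a)
        q_update(s, a, r, s2)    # learn from real experience
        model[(s, a)] = (r, s2)  # remember what happened
        for _ in range(10):      # "fantasize": replay model samples
            (ms, ma), (mr, ms2) = random.choice(list(model.items()))
            q_update(ms, ma, mr, ms2)
        s = 0 if s2 == 3 else s2

The point is just the shape of the loop: most of the value updates come from imagined experience, not from the world.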
Can we stop the "Feynman had an IQ of 126" narrative? Many people doubt his IQ was in the 120 range. IQ scores, despite what people think, are not always accurate, and Feynman is an example of such a case. Plus, you'd typically report a range on IQ exams; given a ±5-point margin, that could place him at 131, which still feels too low for someone as brilliant as he was.
>"Feynman received the highest score in the country by a large margin on the notoriously difficult Putnam mathematics competition exam, although he joined the MIT team on short notice and did not prepare for the test." 
Someone who does that most likely does not have a 126 IQ. Given his accomplishments in physics, it is likely he was more intelligent than any IQ test gave him credit for.
We're not even close to having a halfway decent metric for this catchall "intelligence" term, and these engineers keep failing to model it in a computer program. I see kind of a pattern here.
I don't know enough about general artificial intelligence, but his claim seems to hinge a lot on what we already know. There are things we may not know yet that will lead to us creating general artificial intelligence. Although I don't think it'd be required, it may turn out that the experience of consciousness isn't something we can mechanically produce but is a property of matter itself. I can't prove that is so, but it's a fun little hypothesis I enjoy thinking about. Nonetheless, there's no particular reason to believe humans couldn't figure out similar knowledge and apply it to creating better artificial intelligence.
Some of his reasoning is just flat out fallacious:
> An overwhelming amount of evidence points to this simple fact: a single human brain, on its own, is not capable of designing a greater intelligence than itself. This is a purely empirical statement: out of billions of human brains that have come and gone, none has done so.
Yet out of all those billions of brains, how many of them have tried developing general artificial intelligence, let alone the "AI" of an if-then-else statement? Probably 0.00001% of those billions of brains. Yes, I made that number up, but it wouldn't be surprising for that number to be minuscule, below a fraction of a fraction of a percent.
Evolution has had billions of years. We've only had computers for less than a century. In that respect, it's not at all surprising that humans haven't mastered what nature unintelligently developed for much longer.
> In particular, there is no such thing as “general” intelligence. On an abstract level, we know this for a fact via the “no free lunch” theorem — stating that no problem-solving algorithm can outperform random chance across all possible problems.
That depends on how you define the word "general". I don't think that scientists and engineers working on intelligence actually use the term "general artificial intelligence" in the sense that such intelligence could solve literally any problem. Humans are generally intelligent within a set of fixed domains, but that doesn't mean they don't have general intelligence within certain constraints.
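The "no free lunch" theorem itself is easy to see in miniature. Here's a toy Python sketch (my own illustration): enumerate every possible function from a 4-point domain to 3 values, and any two non-repeating search strategies, however clever one of them looks, take the same number of evaluations on average to find the maximum.

    import itertools

    DOMAIN = range(4)
    VALUES = range(3)

    def run(strategy, f):
        # evaluations until the global max of f is first seen
        best, history, visited = max(f), [], set()
        while True:
            x = strategy(history, visited)
            visited.add(x)
            history.append((x, f[x]))
            if f[x] == best:
                return len(history)

    def left_to_right(history, visited):
        return min(set(DOMAIN) - visited)

    def follow_the_signal(history, visited):
        # "clever" rule: probe next to the best point seen so far
        unvisited = sorted(set(DOMAIN) - visited)
        if not history:
            return unvisited[0]
        best_x = max(history, key=lambda h: h[1])[0]
        return min(unvisited, key=lambda x: abs(x - best_x))

    for strat in (left_to_right, follow_the_signal):
        total = sum(run(strat, f)
                    for f in itertools.product(VALUES, repeat=4))
        print(strat.__name__, total / 3 ** 4)   # identical averages

Averaged over all 81 possible functions, the "clever" strategy buys you nothing, which is exactly the NFL claim. It says nothing about performance on the structured problems we actually care about.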
That speaks to problems beyond "use our brains as a template".
There's no contract signed with the universe that we will be able to design ever faster computers forever (or even just for 50-100 years more).
Don't forget that people used to believe that the heart contained the mind and the brain was just a cooler. There was real evidence that caused scientific consensus to change.
> It's an assumption that has been tested many times.
But it does not make that assumption.
At any rate, correlation does not imply causation.
Reason is just a category of how we see the world, not necessarily some ultimate truth.
But we shouldn't prefer an incorrect model to no model, and the brain = mind model does not seem correct.
> So, the objection that humans cannot solve every problem only shows that humans might not be complete halting oracles, but cannot show that humans are not partial halting oracles.
I don't get it. We can already easily make Turing machines that answer the halting problem for certain infinite sets of Turing machines, so there's nothing unique about the human brain in this regard.
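For instance, here's a trivial Python sketch (my own illustration): take the infinite family of programs of the form "x = x0; while x != 0: x += step", parameterized by integers (x0, step). Halting for every program in this family is completely decidable:

    def halts(x0, step):
        # decides halting for: x = x0; while x != 0: x += step
        if x0 == 0:
            return True    # loop is never entered
        if step == 0:
            return False   # x never changes
        # x reaches exactly 0 iff it moves toward 0 (opposite signs)
        # and step divides x0 evenly
        return x0 * step < 0 and x0 % step == 0

    print(halts(6, -2))   # True:  6, 4, 2, 0
    print(halts(5, -2))   # False: 5, 3, 1, -1, ...

That's a partial halting oracle, and it fits in ten lines.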
> We can even remove an infinite number of problems from the set and still have an infinite and undecidable set.
It is an undecidable set, so no TM can decide membership in it.
So then, the claim is that it's at least conceivable that humans can do this, because you can remove a finite or even infinite number of elements from an infinite set and still be left with an infinite set. I still don't understand how this is a remotely useful insight or in any way an indication that humans may have super-Turing abilities.
But it seems extremely likely to me that some arrangement of material could create a mind that, if not exactly human, we would recognize as intelligent and useful.
If you're skeptical of the materialist assumption, a productive line of inquiry is to investigate what capabilities of a mind can or cannot be replicated by computers or other material devices. That probably looks a lot like doing AI research.
People have made plausible attempts to approximate SI (Solomonoff induction) with practical amounts of computing, like the Monte-Carlo AIXI approximation (https://arxiv.org/abs/0909.0801).
SI is a formalization of this general capability. Perhaps a finite TM can approximate SI to some degree, but to what degree, and is this degree equivalent to human capability? I don't know of any evidence of the latter. If you do, let me know!
(Not that I personally believe that it's anything but a meat computer.)
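To give a feel for what "approximating SI" means, here's a toy Python sketch (my own illustration, not the paper's method). Restrict the hypothesis space to one computable family, "repeat pattern p forever", and use 2^-len(p) as a stand-in for the 2^-(program length) prior that real Solomonoff induction puts on programs:

    from itertools import product

    def predict_next(observed, max_len=8):
        # score each "repeat p forever" hypothesis consistent with
        # the data; shorter patterns get exponentially more prior
        alphabet = sorted(set(observed))
        votes = {}
        for L in range(1, max_len + 1):
            for p in product(alphabet, repeat=L):
                hyp = (p * (len(observed) // L + 1))[:len(observed) + 1]
                if tuple(observed) == hyp[:len(observed)]:
                    nxt = hyp[len(observed)]
                    votes[nxt] = votes.get(nxt, 0.0) + 2.0 ** -L
        return max(votes, key=votes.get) if votes else None

    print(predict_next(list("010101")))   # -> '0'

Real SI does the same posterior-weighted vote over all programs, which is exactly why it's incomputable and why approximations have to truncate the hypothesis space somehow.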
That the mind is created by another piece of matter, other than the brain?
Or that there is "something" that is not matter (that incidentally produces the brain)?
What is weird is assuming the existence of something that is not matter and so, by definition, cannot be observed or measured.
Personally I love to assume there is a teapot orbiting Saturn (cf. Bertrand Russell).
Can you disprove that?
It's an argument against the brain-in-a-jar trope, and I think it's an empirical argument, unlike Russell's teapot.
On this side there is a strong tendency to be blinded by hype, to underestimate problems, or to wish them away with a naive conviction that things will just happen, based more on hope than on reality.
We saw that with the self-driving crowd underestimating the problems and overestimating the existing technology, and now with the AI folks, who are already guilty of perpetuating hype while knowing full well that their extremely limited definition of 'AI' is nothing like what the world understands by AI.
All the privacy shenanigans, the crypto disaster, self-driving cars, and AI leave the reputation of the tech community in a very bad place.
I don't think the technology required to build superhuman AGI exists yet, at least not at a reasonable cost to humanity. I don't think we will reach the level of massively parallel computing required for superhuman AGI until maybe 2070, and that's only if we bring together all the world's supercomputers into one giant artificial brain.
Just recently, within the last decade, we've seen incredible advances in the usefulness of existing ML/NN-based approaches. We have phones and smart speakers with sufficiently reliable speech recognition. Ditto for vision tasks.
What's different now compared to decades past is that businesses can see the benefit of incremental improvements in existing systems. If Siri can easily be demonstrated to be more useful than Cortana, that is a significant competitive advantage, which will sell products and services.
The tech giants (and others) see this, so they will continue to invest. This isn't like in the past, where we tried things here and there, they didn't work as well as we had hoped, and then we stopped investing.
The pressure is on all the tech giants to keep investing, and applying that research to more and more of their own businesses (like using ML to manage power and cooling at a data center).
We're on a roller-coaster ride into the future, and nothing short of worldwide disaster will stop it. For good or for bad.
The only reasonable definition I can come up with for this would be "more intellectually capable than all of human society." So, yeah.
(Any other definition would still allow humanity to outsmart it.)
This feels disingenuous. If things other than intelligence are bottlenecking human success (which isn't even entirely true), computers are generally better at those things, especially in an era of massive low-cost computing resources.
We have an example of superintelligence already: humans working together in groups such as corporations are a superintelligence.
So from that, the invention of a human-level AGI naturally leads to superintelligence - lots of the AGIs cooperating.
The question then becomes whether we're going to get to AGI. Evidence points to yes. AGI (human intelligence) is a collection of abilities, and the machine learning community is steadily making progress on them. Speech, vision, machine translation, question answering, summarization, etc. are all being worked on, and steady progress, in many cases rapid progress, is being made.
Unsupervised learning and reinforcement learning are the frontiers and both have advancements in just the past couple of years (GANs, predictive learning, inverse reinforcement learning, imitation learning, domain randomization).
Unsupervised learning in particular is likely the key to AGI, and only recently has significant progress been made on it: predictive learning (as Yann LeCun calls it).
(Personal conjecture: as a larger number of people investigate predictive learning, it might lead to AGI in maybe just two years.)
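To illustrate the idea (my own toy example, with a made-up corpus): in predictive learning the training signal is just "predict the next input", with no human labels needed. Here's the crudest possible version, a count-based next-character model in Python:

    import numpy as np

    text = "hello world, hello world"
    chars = sorted(set(text))
    ix = {c: i for i, c in enumerate(chars)}

    # count-based next-character model: P(next | current), +1 smoothing
    counts = np.ones((len(chars), len(chars)))
    for a, b in zip(text, text[1:]):
        counts[ix[a], ix[b]] += 1
    probs = counts / counts.sum(axis=1, keepdims=True)

    def predict(c):
        # most likely character to follow c
        return chars[int(np.argmax(probs[ix[c]]))]

    print(predict('h'))   # -> 'e'

Swap the counting for a neural net and the characters for video frames, and you get the flavour of what LeCun means: the data supervises itself.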
Wasn't there an assertion, mentioned several times here on HN, that researchers have found that with AI the seemingly hard becomes easy (e.g. playing chess) and the seemingly easy becomes hard (e.g. walking or... becoming bored with chess)? It now seems qualitatively harder to design a program capable of boredom than one capable of being a chess master.