ravitation 10 days ago [-]
Discussion of the original article on HN: https://news.ycombinator.com/item?id=15788807
dang 10 days ago [-]
We changed the url from https://mindmatters.today/2018/11/software-pioneer-says-gene..., which simply copies excerpts from this.
yters 10 days ago [-]
If the original article is a year old, why was this killed as dupe? I've seen old articles reposted on HN before. This repost seemed to be pretty popular.

Also, why is it considered blogspam for having excerpts from an original article? I also see this occur frequently on HN. The article not only has excerpts, but comments on the excerpts and links to related material. This appears useful to an interested reader.

whyonearth 10 days ago [-]
What's wrong with the website's editor, posting blogspam of a year-old post as current news? Should be flagged.
Rallerbabs 10 days ago [-]
Quote: "An overwhelming amount of evidence points to this simple fact: a single human brain, on its own, is not capable of designing a greater intelligence than itself. This is a purely empirical statement: out of billions of human brains that have come and gone, none has done so. Clearly, the intelligence of a single human, over a single lifetime, cannot design intelligence, or else, over billions of trials, it would have already occurred."

What an incredibly inadequate argument. Obviously, none could have done so in the past. Because the required resources simply weren't available. Breakthroughs are being made right here in this era, with Jeff Hawkins' being one of the latest.

This guy reminds me of Richard Smalley, who, against better judgment, attempted to make irrational arguments against molecular nanotechnology, as envisioned by Drexler.

Also, he can't be a software pioneer. He's too damn young.

Sharlin 10 days ago [-]
Wow. That argument can be applied to any invention or creation. There are bad arguments and then there are ridiculous ones.
baddox 10 days ago [-]
Yes. The argument is logically equivalent to "nothing can ever happen for the first time."
antt 10 days ago [-]
He is merely attacking the lone inventor myth. While it is good for putting names in the history books, it is completely inadequate for actually deciding who invented something.

Mathematics has the rule of thumb that anything named after someone was not discovered by them. It should equally well apply to all fields of human endeavour.

Until we get rid of that myth we will never figure out how creativity actually works.

Upvoter33 10 days ago [-]
This quote stuck out for me too, and is clearly, as you say, "inadequate". Also, no invention comes from a single brain; the biggest software systems we build, for example, are designed by many, many people. No one person can even understand something as complex as some of our large-scale software systems. And yet, we build them and they work.
the_af 10 days ago [-]
In the full article, the author addresses this. He argues a human's intelligence is more externalized in society than physically located within his/her brain.
randyrand 10 days ago [-]
In a similar vein, it is impossible for a human to design something that is faster than itself at moving. This is why cars are limited to human running speed. /s
ceejayoz 10 days ago [-]
> An overwhelming amount of evidence points to this simple fact: a single human brain, on its own, is not capable of designing a greater intelligence than itself.

I mean, that's probably true.

That's why we have big corporations with multi-billion dollar R&D budgets for large projects. A single human brain probably couldn't code all of Gmail, let alone an AI.

sullyj3 10 days ago [-]
For me, it's at least refreshing to see one that doesn't rely on the implicit hazy assumption that biological intelligence is magical.
wilg 10 days ago [-]
While I agree with you, I'm not sure what youth has to do with being a pioneer or not.
rrauenza 10 days ago [-]
I suspect it has less to do with the age of the pioneer and more to do with the perception/opinion of what decade the pioneering happened in.
marcosdumay 10 days ago [-]
Software pioneering happened from the late '50s to the early '70s. Does the guy look like he was an adult in the early '70s?
wilg 10 days ago [-]
Oh, this is a specific definition I've never heard of. Is that common? Some quick Googling suggests that people use the term "pioneer" in relation to computer science in a broader sense.

https://en.wikipedia.org/wiki/List_of_pioneers_in_computer_s... (spans 500 BC to 2011 AD)

https://en.wikipedia.org/wiki/Computer_Pioneer_Award (awarded each year)

ocfx 10 days ago [-]
Glad I'm not the only one who had a problem with that reasoning.
fizx 10 days ago [-]
"When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong." ~Arthur C. Clarke
coldtea 10 days ago [-]
Yeah. Here we are in one of those rare cases when he is right (though not necessarily for the arguments he makes)
worldsayshi 10 days ago [-]
So what are the right arguments?
coldtea 10 days ago [-]
1) That the rise of AI was already predicted as "coming soon" once before, in the '60s and '70s, and nothing came of it.

2) That current AI is just very primitive NNs, and all the hoopla is mostly hype (like Big Data, Grid Computing, Biotechnology, Nanotechnology, Virtual Reality, 3D Printing, and several other previous fads touted as revolutionary and the cure for everything)

3) That they are several orders of magnitude more primitive than a human brain.

4) That we haven't had Moore-law style increases in CPU power for quite some time.

5) That, never mind AI, we can't even make a good email app...

6) That all kinds of peddlers make good money by touting AI (e.g. IBM selling BS tech as "Watson")

wild_preference 10 days ago [-]
If you can agree that intelligence is just an information processing challenge instead of a magical god-given treasure, then general AI is inevitable.

Though note that even in six points you couldn't come up with an argument against the future of AI development. They were all independently irrelevant, like denying future medical breakthroughs because doctors didn't wash their hands 100 years ago. Your points are all distractions, like suggesting that Elon Musk's tweeting habits today doom all future humans from ever reaching Mars.

coldtea 10 days ago [-]
>If you can agree that intelligence is just an information processing challenge instead of a magical god-given treasure, then general AI is inevitable.

As a scientist, I don't believe anything is "inevitable" (that's for religious people to believe).

I can very much imagine scenarios where we're already on the steep end of a technological curve, having exhausted most low hanging fruit, and failing to make many more revolutionary breakthroughs (no matter the timeframe).

>Though note that you couldn't come up with an argument against the future of AI development in six points. They were all independently irrelevant, like denying future medical breakthroughs because doctors didn't wash their hands 100 years ago.

I didn't want to make an argument against the possibility or impossibility of AI in abstracto (whether it's physically possible or not. Since our brain does it, it is physically possible. That's boring, again left to religious people to discuss, of which there's no shortage, even among the atheists). I wanted to make one about its _actual_ possibility, given what we know of our technological development, past claims, trends, etc.

craigsmansion 10 days ago [-]
Okay, then it's a magical god-given treasure, which is as good a definition as any in this context, and that illustrates the problem. When most people talk about general AI, what they talk about is artificial consciousness. That's fair enough, but you can't build what you don't understand.

If someone says general AI is possible because consciousness is not a magical god-given treasure, I'm allowed to ask what it is instead. That would give me a model to at least see how someone would approach it, something to falsify against.

General AI is to current AI what flying saucers are to airplanes: if you look at the latter it's easy to envision the former, and seeing the technological advances it seems inevitable. On closer inspection, though, a whole new set of concepts, some very improbable, is needed.

Flying saucers would need some gravity cancellation property. Not impossible by itself, but far from a natural evolution of aircraft.

By analogy, general AI would need a workable model of consciousness. Again, not impossible by itself, but also not a natural consequence of current advances in the field of AI.

dicroce 10 days ago [-]
There are an estimated 100 billion neurons in the human brain. We know from information theory that DNA is not nearly long enough to describe all of those connections. This means that most of the complexity of the brain is emergent.

I envision bundles of neurons that begin their existence mostly disconnected... These bundles are described somewhere in DNA (but not their ultimate learned configuration) and so is whatever learning algorithm drives this process.

We will have AGI when we understand the underlying learning algorithm our brain uses... Personally, I believe that algorithm involves fantasizing future scenarios and learning from the fantasies.
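As a rough illustration of the information-theory point above, here is a back-of-envelope calculation using commonly cited ballpark figures for neuron, synapse, and genome counts (these numbers are assumptions, not figures from the article or the comment):

```python
# Back-of-envelope only: ballpark figures, not measurements from the article.
NEURONS = 100e9             # ~10^11 neurons in a human brain (commonly cited)
SYNAPSES_PER_NEURON = 1e3   # conservative; estimates range up to ~10^4
BASE_PAIRS = 3.2e9          # approximate human genome length
BITS_PER_BASE = 2           # 4 possible bases, so 2 bits each

connections = NEURONS * SYNAPSES_PER_NEURON      # ~1e14 synapses
genome_bits = BASE_PAIRS * BITS_PER_BASE         # ~6.4e9 bits (~0.8 GB)

print(f"synapses:         {connections:.1e}")
print(f"genome capacity:  {genome_bits:.1e} bits")
print(f"bits per synapse: {genome_bits / connections:.1e}")
# Even one bit per synapse would need ~1e14 bits, several orders of magnitude
# more than the genome holds; hence the "most of the wiring is emergent" claim.
```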

onemoresoop 10 days ago [-]
Fantasy is a mental computation as well. Fantasy AI ftw!
thrwaway22 10 days ago [-]
"Feynman reported 126, James Watson, co-discoverer of DNA, 124 — which is exactly the same range as legions of mediocre scientists."

Can we stop the "Feynman had an IQ of 126" narrative? Many people doubt his IQ was in the 120 range. IQ scores, despite what people think, are not always accurate. Feynman is an example of such a case. Plus, you'd typically report a range on IQ exams. Given a +/-5 point range, that could place him in the 131 category, which still feels too low for someone as brilliant as he was.

>"Feynman received the highest score in the country by a large margin on the notoriously difficult Putnam mathematics competition exam, although he joined the MIT team on short notice and did not prepare for the test." [0]

Someone who does that most likely does not have a 126 IQ. Given his accomplishments in physics, it is likely he was more intelligent than any IQ test gave him credit for.

[0] https://www.psychologytoday.com/us/blog/finding-the-next-ein...

mar77i 10 days ago [-]
We keep trying to teach machines this intelligence thing, while we're still measuring the real thing using IQ tests?

We're not even close to having a halfway decent metric for this catchall intelligence term - and these engineers keep failing to model it in a computer program. I see kind of a pattern here.

leesec 10 days ago [-]
Can we stop caring what people's IQ is? I've never encountered a scenario where it mattered.
axus 10 days ago [-]
IQ is just a number determined by a test, by definition. I don't think it's possible for a person's IQ number to be independent from the IQ test. Certainly he had more intelligent contributions to physics than expected from 126 IQ.
aetherson 10 days ago [-]
IQ is valuable insofar as it is a proxy for g. I think that your parent poster was suggesting that while IQ is in general correlated with g, there are cases where it is not.
fernly 10 days ago [-]
...or, the usual IQ tests of the 1950s did not adequately measure whatever Feynman was particularly good at.
ravenstine 10 days ago [-]
And yet general biological intelligence arose from unintelligent forces? Hm, interesting.

I don't know enough about general artificial intelligence, but his claim seems to hinge a lot on what we already know. There are things we may not know yet that will lead to us creating general artificial intelligence. Although I don't think it'd be required, it may turn out that the experience of consciousness isn't something we can mechanically produce but is a property of matter itself. Now, I can't prove that is so, but it's a fun little hypothesis I enjoy thinking about. Nonetheless, there's no particular reason to believe humans couldn't figure out similar knowledge and apply it to creating better artificial intelligence.

Some of his reasoning is just flat out fallacious:

> An overwhelming amount of evidence points to this simple fact: a single human brain, on its own, is not capable of designing a greater intelligence than itself. This is a purely empirical statement: out of billions of human brains that have come and gone, none has done so.

Yet out of all those billions of brains, how many of them have tried developing general artificial intelligence, or even the "AI" of an if-then-else statement? Probably 0.00001% of those billions of brains. Yes, I made that number up, but it wouldn't be surprising for that number to be minuscule, below a fraction of a fraction of a percent.

Evolution has had billions of years. We've only had computers for less than a century. In that respect, it's not at all surprising that humans haven't mastered what nature unintelligently developed for much longer.

> In particular, there is no such thing as “general” intelligence. On an abstract level, we know this for a fact via the “no free lunch” theorem — stating that no problem-solving algorithm can outperform random chance across all possible problems.

That depends on how you define the word "general". I don't think that scientists and engineers working on intelligence are actually using the word "general artificial intelligence" in the sense that such intelligence could solve literally any problem. Humans are generally intelligent within a set of fixed domains, but that doesn't mean they don't have general intelligence within certain constraints.
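As an aside, the "no free lunch" theorem quoted above can be illustrated on a toy scale: averaged over every possible objective function on a tiny domain, any two fixed, non-repeating search strategies perform identically. This sketch is only an illustration of that flavor, not the theorem's general statement; the domain and the performance measure are invented for the example:

```python
# Toy "no free lunch" illustration (illustrative only, not from the article).
# Domain {0,1,2,3}, objective values in {0,1}; enumerate all 16 possible
# objective functions and compare two fixed, non-repeating query orders.
from itertools import product

DOMAIN = [0, 1, 2, 3]

def cost(order, f):
    """Number of queries until the best value of f has been observed."""
    best = max(f[x] for x in DOMAIN)
    for i, x in enumerate(order, start=1):
        if f[x] == best:
            return i

def average_cost(order):
    total = count = 0
    for values in product([0, 1], repeat=len(DOMAIN)):   # all 16 objectives
        f = dict(zip(DOMAIN, values))
        total += cost(order, f)
        count += 1
    return total / count

print(average_cost([0, 1, 2, 3]))  # 1.6875
print(average_cost([3, 1, 0, 2]))  # 1.6875, identical, as NFL predicts
```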

gameswithgo 10 days ago [-]
Our brains already do it, likelihood is 100%. Nothing would stop transistor based computers from doing it, though they might be very much slower at it.
falcolas 10 days ago [-]
Have we even been able to start down this road though? We've been able to simulate neurons for quite some time (and even make computers that simulate them in hardware), but the best we can get them to do without significant intervention is categorize inputs. And even categorizing inputs requires a volume of inputs many times greater than that required by our own brains.

That speaks to problems beyond "use our brains as a template".
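For what it's worth, here is a minimal sketch of the kind of "categorize inputs" task the parent describes: a single artificial neuron (a perceptron) learning the AND function, shown the same four labelled examples over and over. This is only a toy illustration under those assumptions, not a claim about how brains or production systems learn:

```python
# Minimal single-neuron classifier (a perceptron); illustrative only.
import random

def train_perceptron(samples, epochs=50, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):                 # the same few examples, many times over
        random.shuffle(samples)
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred              # perceptron update rule
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn logical AND from its four labelled examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(list(data))
for (x1, x2), label in data:
    pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
    print((x1, x2), "->", pred, "expected", label)
```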

coldtea 10 days ago [-]
Well, a halt on Moore's law (which we're already seeing) can very easily "stop transistor based computers from doing it" in any realistic way.

There's no contract signed with the universe that we will be able to design ever faster computers forever (or even just for 50-100 years more).

yters 10 days ago [-]
How do we know our brain creates our mind? That is a materialist assumption, which is just an assumption.
yorwba 10 days ago [-]
It's an assumption that has been tested many times. Humans can survive losing parts of their brain, but it causes observable changes in their mind. That's where those colorful maps of brain regions and their functions come from: by recording what kind of brain damage caused what kind of function to deteriorate.

Don't forget that people used to believe that the heart contained the mind and the brain was just a cooler. There was real evidence that caused scientific consensus to change.

zamalek 10 days ago [-]
Dualist mind theory is not yet quack science, but it looks like it could be.

> It's an assumption that has been tested many times.

But it does not make that assumption.

yters 10 days ago [-]
There are some interesting examples of very highly functioning individuals with very little brain.

At any rate, correlation does not imply causation.

0xffff2 10 days ago [-]
What's your proposed alternative? I'm trying not to be dismissive, but I'm also having trouble coming up with a reasonable alternative hypothesis.
coldtea 10 days ago [-]
Why would an alternative hypothesis have to be "reasonable"?

Reason is just a category of how we see the world, not necessarily some ultimate truth.

yters 10 days ago [-]
I would say halting oracles are a possibility that has some degree of mathematical traction:

https://www.am-nat.org/site/halting-oracles-as-intelligent-a...

But, we shouldn't prefer an incorrect model to no model, and the brain = mind model does not seem correct.

baddox 10 days ago [-]
From that article:

> So, the objection that humans cannot solve every problem only shows that humans might not be complete halting oracles, but cannot show that humans are not partial halting oracles.

I don't get it. We can already easily make Turing machines that answer the halting problem for certain infinite sets of Turing machines, so there's nothing unique about the human brain in this regard.
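To make that concrete with a deliberately trivial, hypothetical example (not taken from the linked article): here is a total decider for the halting problem restricted to one infinite family of toy "programs". It says nothing about general Turing machines; it only shows that restricted halting problems can be decidable:

```python
# Hypothetical restricted family: a "program" is a pair (start, step) meaning
#   x = start
#   while x != 0:
#       x += step
# Halting for this infinite family is decidable, unlike the general problem.

def halts(start: int, step: int) -> bool:
    if start == 0:
        return True          # the loop body never runs
    if step == 0:
        return False         # x never changes, so it loops forever
    # x reaches 0 iff start + n * step == 0 for some n >= 1,
    # i.e. -start is a positive multiple of step.
    return (-start) * step > 0 and (-start) % step == 0

assert halts(6, -2)          # 6 -> 4 -> 2 -> 0
assert not halts(5, -2)      # 5 -> 3 -> 1 -> -1 -> ... (skips 0)
assert halts(-9, 3)          # -9 -> -6 -> -3 -> 0
assert not halts(1, 0)       # x stays 1 forever
```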

yters 10 days ago [-]
The previous sentence to the one you quote explains:

> We can even remove an infinite number of problems from the set and still have an infinite and undecidable set.

It is an undecidable set, so it cannot be decided by a TM.

baddox 10 days ago [-]
Oh okay, I guess that entire paragraph is just a complicated way of rephrasing their claim, which is that there is at least one TM for which no TM can possibly answer the halting problem, but for which humans can.

So then, the claim is that it's at least conceivable that humans can do this, because you can remove a finite or even infinite number of elements from an infinite set and still be left with an infinite set. I still don't understand how this is a remotely useful insight or in any way an indication that humans may have super-Turing abilities.

yters 10 days ago [-]
That's correct. It does not prove that humans have this capability. It just shows that a common counter argument fails, and then the article goes on to explain how it would make an empirical difference if humans are partial halting oracles.
tlb 10 days ago [-]
The human mind certainly involves systems other than the brain, like the limbic system. Even the face is involved in communicating emotion between various parts of the brain, which is why you can make yourself happier by smiling.

But it seems extremely likely to me that some arrangement of material could create a mind that, if not exactly human, we would recognize as intelligent and useful.

If you're skeptical of the materialist assumption, a productive line of inquiry is to investigate what capabilities of a mind can or cannot be replicated by computers or other material devices. That probably looks a lot like doing AI research.

yters 10 days ago [-]
Yes, for example Solomonoff induction (SI) requires infinite computational capacity. If SI is representative of the human mind, then the human mind cannot be physical.
tlb 9 days ago [-]
The human mind isn't exactly Solomonoff, because we often hold theories that are more complicated than necessary. Working towards simpler scientific or mathematical theories takes decades. So it seems like we don't just have SI in the inner loop.

People have made plausible attempts to approximate SI with practical amounts of computing, like AIXI (https://arxiv.org/abs/0909.0801).
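For readers curious what such an approximation even looks like, here is a toy sketch of the flavor of it: a Bayesian mixture over a tiny hand-picked hypothesis class, with each hypothesis weighted by 2^(-description length). The hypothesis class and its "description lengths" are invented for illustration; real Solomonoff induction sums over all programs and is uncomputable, and AIXI-style approximations are far more sophisticated:

```python
# Toy, hand-picked hypothesis class; the "description lengths" are invented.
HYPOTHESES = [
    # (name, description length in bits, predictor: history -> P(next bit = 1))
    ("always 0",  2, lambda h: 0.01),
    ("always 1",  2, lambda h: 0.99),
    ("alternate", 3, lambda h: 0.5 if not h else (0.99 if h[-1] == 0 else 0.01)),
    ("uniform",   1, lambda h: 0.5),
]

def mixture_prob_one(history):
    """P(next bit = 1) under a 2^-length weighted Bayesian mixture."""
    num = den = 0.0
    for _, length, predict in HYPOTHESES:
        w = 2.0 ** (-length)                  # prior weight: shorter is likelier
        for i, bit in enumerate(history):     # multiply in the likelihood
            p1 = predict(history[:i])
            w *= p1 if bit == 1 else 1.0 - p1
        num += w * predict(history)
        den += w
    return num / den

print(mixture_prob_one([0, 1, 0, 1, 0, 1]))  # small: "alternate" dominates, predicts 0 next
print(mixture_prob_one([1, 1, 1, 1, 1, 1]))  # near 1: "always 1" dominates
```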

yters 9 days ago [-]
It is not so much whether we identify the simplest theory, but that we are so good at inferring principles and predicting, whether in science or in day to day life.

SI is a formalization of this general capability. Perhaps a finite TM can approximate SI to some degree, but to what degree, and is this degree equivalent to human capability? I don't know of any evidence of the latter. If you do, let me know!

Sharlin 10 days ago [-]
If we can’t assume physicalism, we can’t assume anything. The sun might not rise tomorrow because it might be magicked away.
yters 10 days ago [-]
Can you explain why prediction is predicated on physicalism? For example, Hume argues the opposite, and his argument is furthered by Wolpert's No Free Lunch Theorem. It seems like if prediction is possible it is only because physicalism is false.
worldsayshi 10 days ago [-]
It's amusing how we in 2018 still can't fully rule out that the brain depends on magic to work.

(Not that I personally believe that it's anything but a meat computer.)

joak 10 days ago [-]
What do you mean ?

That the mind is created by another piece of matter, other than the brain?

Or that there is "something" that is not matter (that incidentally produces the brain)?

What is weird is assuming the existence of something that is not matter and so by definition cannot be observed or measured.

Personally I love to assume there is a teapot orbiting Saturn (cf Bertrand Russell)

Can you disprove that ?

the_af 10 days ago [-]
I cannot answer for the parent poster (actually, I cannot answer at all :P ), but the author of TFA posits that the mind is created -- or co-evolved with -- not by the brain alone, but also by the body, the environment and even by society itself.

It's an argument against the brain-in-a-jar trope, and I think it's an empirical argument, unlike Russell's teapot.

throw2016 10 days ago [-]
It seems more likely that fundamental innovations are going to come from academia, with its culture of seriousness and decades of work without returns.

On the industry side, there is a strong tendency to be blinded by hype, to underestimate problems, or to wish them away with a naive desperation that things will just happen, based more on hope than on reality.

We saw that with the self-driving crowd underestimating the problems and overestimating the existing technology, and now with the AI folks, who are already guilty of perpetuating hype, knowing full well that their extremely limited definition of 'AI' is nothing like what the world understands by AI.

All the privacy shenanigans, the crypto disaster, self-driving cars and AI leave the reputation of the tech community in a very bad place.

xutopia 10 days ago [-]
I too have some resistance to the notion that we will see AGI at a higher level than humans within 50 years, but I don't think the arguments he makes hold up. The technology simply did not exist before, and we're improving on so many fronts mostly because we're now capable of training huge neural networks we couldn't fathom having access to just a few years ago.

I don't think the technology required to build superhuman AGI exists yet, at least not at a reasonable cost to humanity. I don't think we will reach the level of massively parallel computing required for superhuman AGI until maybe 2070, and that's only if we bring all the world's supercomputers together in one giant artificial brain.

ansible 10 days ago [-]
I think we've seen the cross-over point for AGI in the last few years.

Just recently, within the last decade, we've seen incredible advances in the usefulness of existing ML/NN-based approaches. We have phones and smart speakers with sufficiently reliable speech recognition. Ditto for vision tasks.

What's different now compared to decades past is that businesses can see the benefit of incremental improvements in existing systems. If Siri can easily be demonstrated to be more useful than Cortana, that is a significant competitive advantage, which will sell products and services.

The tech giants (and others) see this, so they will continue to invest. This isn't like in the past, where we tried things here and there, they didn't work as well as we had hoped, and then we stopped investing.

The pressure is on all the tech giants to keep investing, and applying that research to more and more of their own businesses (like using ML to manage power and cooling at a data center).

We're on a roller-coaster ride into the future, and nothing short of worldwide disaster will stop it. For good or for bad.

worldsayshi 10 days ago [-]
> Superhuman AGI

The only reasonable definition I can come up with for this would be "more intellectually capable than all of human society." So, yeah.

(Any other definition would still allow humanity to outsmart it.)

iotb 10 days ago [-]
To make an accurate projection or statement about something, you generally need a sound understanding of it. That's just my general intelligence speaking here. I see nothing of note suggesting that this person has a fundamental understanding of general intelligence, or has even pursued it. Authoring an optimization algorithm suite doesn't mean you understand general intelligence. A non-read before even clicking the link. Two posts about AGI today; it seems the next hype train is arriving right on schedule, and its banner will be AGI.
baq 10 days ago [-]
the meat computer hidden in your head does it running on ~20W of power. why an electronic computer would be incapable of the same feat is beyond me. make it 20kW or 20MW if 20W sounds unreasonably low.
josquindesprez 10 days ago [-]
> There is no evidence that a person with an IQ of 170 is in any way more likely to achieve a greater impact in their field than a person with an IQ of 130.

This feels disingenuous. If things other than intelligence are bottlenecking human success (which isn't even entirely true), computers are generally better at those things, especially in an era of massive low-cost computing resources.

overlords 10 days ago [-]
We have an example AGI - the human brain.

We have an example superintelligence - humans working together in groups such as corporations are a superintelligence.

So from that, the invention of a human-level AGI naturally leads to superintelligence - lots of the AGIs cooperating.

The question then becomes whether we're going to get to AGI. Evidence points to yes. AGI (human intelligence) is a collection of abilities, and the machine learning community is steadily making progress on them. Speech, vision, machine translation, question answering, summarization etc. are all being worked on, and steady progress, in many cases rapid progress, is being made.

Unsupervised learning and reinforcement learning are the frontiers and both have advancements in just the past couple of years (GANs, predictive learning, inverse reinforcement learning, imitation learning, domain randomization).

Unsupervised learning in particular is likely the key to AGI and only recently has significant progress been made on it - predictive learning (as Yann Lecun calls it).

(Personal conjecture - the next couple of years when a larger number of people investigate predictive learning might lead to AGI - in maybe just 2 years).
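As an aside on what "predictive learning" means at its simplest: learn to predict the next symbol from unlabeled data, so the data provides its own labels. Real systems use deep networks trained on enormous corpora; this bigram toy (my own illustration, not from the comment above) only shows the self-supervised framing:

```python
# Toy self-supervised "predictive learning": a bigram next-character model.
from collections import Counter, defaultdict

def fit_bigram(text):
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):    # each character labels its predecessor
        counts[a][b] += 1
    return counts

def predict_next(counts, ch):
    return counts[ch].most_common(1)[0][0] if counts[ch] else None

corpus = "the cat sat on the mat and the cat ran"
model = fit_bigram(corpus)
print(predict_next(model, "h"))  # 'e', learned with no labels at all
```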

commandlinefan 10 days ago [-]
I don't have the exact quote in front of me, but in 1979, Douglas Hofstadter wrote in his book "Gödel, Escher, Bach" something to the effect of: "It may be that some day a computer can beat a human at chess, but when it happens, it will be by the sort of computer that will say, 'no, I'm bored with chess, I would rather talk about poetry'". If Hofstadter can be wrong about anything, anybody can be wrong about anything.
the_af 10 days ago [-]
Nice quote!

Wasn't there an assertion, mentioned several times here on HN, that researchers have found that with AI the seemingly hard becomes easy (e.g. playing chess) and the seemingly easy becomes hard (e.g. walking or... becoming bored with chess)? It now seems qualitatively harder to design a program capable of boredom than one capable of being a chess master.

baddox 10 days ago [-]
SlipperySlope 10 days ago [-]
His point was refuted long ago by Kurzweil, who observed that only with the aid of computers can humans improve on current computing.