This article was viscerally and emotionally terrifying at the time. For whatever reason it seemed really real. I remember overhearing people talking about "grey goo" over cocktails at startup events -- the scenario here seemed real and imminent.
As with most doomsday predictions, of course, the actual future reality turned out to be equally terrifying and completely different.
(slightly tongue-in-cheek, slightly not).
Give the robots some time!
The end result is that you end up building with organic chemistry, which means your goo gets eaten.
That's what I think of when I hear the term 'grey goo.'
If humans are a collectivity, "we" right now have created the means for most people to live without working. But the upshot of this hasn't been paradise; it has been increasing privation in advanced nations, as everyone still needs a conventional job for social and material survival but fewer and fewer people can find one (and the oversupply of workers reduces their price). Likewise, those who leverage the transformation of society through technology have benefited greatly, but the resulting inequality of wealth distribution has produced tremendous despair for a significant percentage of the population, with little indication that any further transformations will fix this (basic income is one effort, but I don't see how it can happen without severe inflation, etc.).
A similar unthinking dynamic can be seen in climate change and the effective total lack of serious response to it.
It will be a long time, if ever, before some AGI rules humanity. But the use of mechanical processes to control human behavior began not with AI, and not with social media, but longer ago with mass media, and it's certainly in high gear at the moment (in both mass and social media, of course), allowing well-connected actors to control the "news cycle" and guarantee the lack of coherent, collective discussion of this society's overall, serious problems (as mentioned above, and as most people can see in front of them).
Maybe for some, but certainly not for me. It was just another doomsday scenario. Just like the kind that have been cropping up in science fiction for at least a hundred years, and religious tracts for thousands of years before that.
"I remember overhearing people talking about "grey goo" over cocktails at startup events -- the scenario here seemed real and imminent."
Just because they talked about it doesn't mean they thought it was real or imminent. Honestly, I don't think most people were any more scared of it back then than now. Just something to consider, and maybe keep an eye on.
Don't lose hope about losing all hope yet! Apparently, "Scientists accidentally create mutant enzyme that eats plastic bottles. The breakthrough, spurred by the discovery of plastic-eating bugs at a Japanese dump, could help solve the global plastic pollution crisis"
A lot of things seem viscerally and emotionally terrifying after your fourth free drink on the rooftop at the Industry Standard.
It was never imminent. Drexler himself argued that there was a huge bootstrapping problem in creating nanoscale ‘assemblers’ with our macro scale technology.
Kurzweil wants to live forever. He wants to become a cybernetic being. That was his undoing in those books--his desire to outlast death forced him to make wildly optimistic predictions.
The dangers facing us are not AI but simply mundane databases. Everything is being tracked these days. Compliance is being shoved down our throats. You are being watched on the Internet and in daily life. "But I have nothing to hide." Yes, you do. You just don't realize it. You should read up on how people were treated in East Germany and the Soviet Union because that is your future. Technology gives zero shits about humans.
My premise for Bhopal 2.0 is that we automate processes so much, with no accounting for waste and damage, that we simply focus on the top number. Ask an ML system to optimize for an output and it will, happily, all the while harming the people around the process - automatically. Right now Bhopal 2.0 processes are all focused on information systems - abstractions - so it's much harder to point and say "look, this is dangerous". People do very, very, very badly with abstractions.
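The dynamic is easy to sketch in code. Here's a toy, purely illustrative hill-climber (all names and numbers hypothetical): the optimizer is only ever shown "output", while a "waste" term that never appears in the objective grows unchecked.

```python
# Toy sketch: an optimizer that maximizes the one metric it is given,
# while a waste term absent from the objective grows without anyone asking.
import random

random.seed(0)  # fixed seed so the run is reproducible

def output(rate):   # the "top number" the optimizer is told to maximize
    return rate * 10

def waste(rate):    # the harm around the process -- never measured
    return rate ** 2

rate = 1.0
for _ in range(100):
    candidate = rate + random.uniform(-0.1, 0.5)
    if output(candidate) > output(rate):  # only the top number is compared
        rate = candidate

print(output(rate), waste(rate))  # waste grew quadratically, unexamined
```

The point of the toy: nothing in the loop is malicious; harm accumulates simply because it was never part of the objective.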
In some way this already happened with the recent elections. But that was purposely aiming the machines at us.
Maybe I'm wrong, but I thought those predictions have fared pretty well.
edit to add:
optimism, or not... this came quickly (ahem, china)
 > 2019, Public places and workplaces are ubiquitously monitored to prevent violence and all actions are recorded permanently. Personal privacy is a major political issue, and some people protect themselves with unbreakable computer codes.
from his 1999 book, via wikipedia
The rest of my life quite literally flows from reading this essay over and over when it came out. Genuinely happy to see it here.
And this short story about cellular automata: https://archive.org/details/TrueNames
Can you explain what I'm missing? I'm young and non-American, so maybe I'm missing context.
It's a fascinating line of thought--though I obviously don't support the actions taken by its author.
To clarify a little further: Bill Joy in this essay refers to Kurzweil, who refers to Marvin Minsky's "Society of Mind." I became a technological utopian, until reading Naomi Klein's "No Logo" later that year, as well as "Amusing Ourselves to Death" by Neil Postman.
This ultimately led me on a sort of existential quest that brought me to the book "Waking Up in Time" by Peter Russell. I then began to have a much more spiritual orientation toward life and reality, and now I try to be as practical as possible while working toward a positive technological future.
> Jacques Ellul (French: [ɛlyl]; January 6, 1912 – May 19, 1994) was a French philosopher, sociologist, lay theologian, and professor who was a noted Christian anarchist. (...) The dominant theme of his work proved to be the threat to human freedom and religion created by modern technology. Among his most influential books are The Technological Society and Propaganda: The Formation of Men's Attitudes.
I’ve just finished reading his “The Political Illusion” (L’Illusion politique), where he explained (back in the 1960s) how the West lives in a technocratic world focused only on efficiency, and why this causes politicians and political decisions (and ultimately democracy itself) to become somewhat obsolete: every decision the politicians take must ultimately answer to one criterion above all, namely that it be efficient. And since only the technocrats can tell (or approximate) which decisions may or may not be efficient, the politicians end up being just “puppets” rubber-stamping the decisions actually taken by the technocrats.
I’m now just about to start reading his “Le bluff technologique” (it’s in French) and I’m very, very excited about it. Again, I recommend Ellul to all those interested in us, humans, and in how we function. Reading him reminded me of when I first read Hobbes, Rousseau or Tolstoy, i.e. other great literary and philosophical figures from the past who did a great job of describing how our species “operates”. Only Ellul confronts us with this new and modern world, which those authors didn’t have the chance to do.
Later edit: For those down-voting this, I urge them to reconsider. Not the down-voting itself, which I don’t care about, but Jacques Ellul himself. Someone mentioned Ted Kaczynski; it turns out Ellul was his favorite philosopher (https://thesocietypages.org/cyborgology/2012/06/08/the-unabo...). I’ve just finished the introduction of the book I said I was about to start reading, and it’s haunting how he prophesied the “missed chance” represented by the dream of a decentralized society promised at some point by modern computer networks. He “warned” us (in the 1970s and the 1980s) that we had a very short window of opportunity for making this new technology work in our best interest by making things less centralized. It turns out both the www and, more recently, crypto-currencies are such missed chances.
> Kaczynski claimed in all humility that half of what he read in The Technological Society he knew already; he discovered in Ellul a soul mate rather than a teacher. “When I read the book for the first time, I was delighted,” he told a psychiatrist who interviewed him in jail, “because I thought, ‘Here is someone who is saying what I’ve already been thinking.'”
Ellul was an interesting thinker. But, to my knowledge, he never advocated violence. I'm not sure where Kaczynski got the idea for using violence to achieve his ends, but it wasn't from Ellul.
Furthermore, let's say we become part of the machines (a la Greg Bear's EON / Way universe). The line of thinking and questioning starts to move towards resources. These incredible advances in machines, it seems, would be accompanied by incredible advances in resource utilization. The notion of poor vs. rich entities would be completely different, in relative terms, from what we think of today.
Our insignificance relative to the machines might not matter in the slightest. fwiw - worth reading David Brin's uplift books - in those the machines seem to generally stay the heck away from bio life forms :)
The real danger I see comes from ownership. ML is good at predicting things. Nano tech along with genetics will make you immortal and have near magical abilities, and currently only a handful of people will own these technologies.
We are much closer to a cyberpunk dystopia than to a robot revolution. In a way it's always been like that with capitalism: if we don't regulate the free market, then the free market turns people into slaves - and right now we're letting the market regulate both society and tech.
One of my favorite quotes from the article which really has informed a lot of my thinking later:
"Clarke continued: "Looking into my often cloudy crystal ball, I suspect that a total defense might indeed be possible in a century or so. But the technology involved would produce, as a by-product, weapons so terrible that no one would bother with anything as primitive as ballistic missiles."
In 1994 he was convinced that the kinds of "compromises" James Gosling was putting into Java guaranteed it would be dead on arrival. He wasn't wrong that those choices would ultimately limit the language (and they have), but he completely missed the 20 years between then and now in which Java would have a huge impact.
When this editorial came out I had moved on from Sun and was dealing with the leading edge of what would become the dot-com implosion shockwave, and now Bill was telling us it was all pointless, that the world would probably die of its own desire to create cool new things. Well, he wasn't saying it was pointless per se; he was saying we needed to confront the ethics of what we were doing now instead of in the middle of the crisis. And there is much to like about that, but recall that Facebook was created in a dorm room, not in a laboratory like Bell Labs or Sun Labs. So there was no oversight, no 'adult supervision' from people who would ask, as Bill would have, what happens when ...?
So to understand Bill's essay in context I have to ask, "What would he have said to Mark Zuckerberg?" I don't doubt for a moment that had Mark confided in him his vision and his plans, that Bill would have foreseen the size and extent of its impact. Bill is a guy who made more money on Microsoft Stock than on Sun Stock because he sold the latter and bought the former, recognizing that at the end of the day Microsoft would have a larger impact. So what does he do? Does he convince Mark to throw it away? Does he say "You will be one of the richest people in the world but you'll have created a tool that nation states will use to undermine democracies around the world?" And how does Mark respond to that? Probably, "If not me, someone else will figure this out. Look at myspace.com, I'll take the money and figure out the rest after it becomes a problem."
The future doesn't need us, and neither does the present. It is the ultimate hubris of humans from the beginning of time that they are somehow "more special" than the rest of the machine that is the universe. When you read books like "The Vital Question" you might be struck that humans are just a 'step in the path' rather than the starting or ending point of that path. You can imagine self-aware machines arguing over the notion that they evolved from meat.
The power of Bill Joy for me has always been his willingness to say something outrageous that was the logical extension of a path through the point of absurdity, and in that moment to stretch the preconceptions of the people hearing him such that they were able to think of something new that previously they would not have allowed themselves to think. I've felt it first hand and seen it happen in others. The after-the-meeting discussion goes, "That was the craziest thing I think I've ever heard, but something that might not be crazy is if we did this ..."
That's an interesting revision of history to fit a narrative.
Most servers were migrated from Sun to Linux (with minor hiccups). Sun basically set the standard for security and reliability that led to that successor, instead of OS/2 or MacOS or ReactOS or whatever. Microsoft HAD to change their OS to meet the needs of the users (ANY security, more interoperability between *nix and Windows, etc.).
The statement is either poorly phrased or wholly inaccurate, depending on your point of view.
Between 1988 and 1998 Sun stock increased 10x and Microsoft stock had increased 100x. By converting 15% of his Sun holdings into Microsoft stock he outperformed his Sun holdings.
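The arithmetic behind that claim, using the round multiples above, works out like this (toy numbers, not actual returns):

```python
# Back-of-envelope check of the 1988-1998 portfolio claim above.
sun_multiple = 10    # Sun stock: ~10x over the decade
msft_multiple = 100  # Microsoft stock: ~100x over the decade

all_sun = 1.00 * sun_multiple                        # hold everything in Sun
split = 0.85 * sun_multiple + 0.15 * msft_multiple   # move 15% into Microsoft

print(all_sun, split)  # the split portfolio ends near 23.5x vs 10x
```

So even a modest 15% reallocation more than doubles the outcome, because the Microsoft leg alone (15% at 100x) is worth more than the entire all-Sun portfolio.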
We used to say that artificial general intelligence (AGI) was 20 or 30 years away. Now it's more like 10. The main thing stopping it is the utter lack of exploration in computer science of highly parallelized/concurrent processing. Hardware-wise, that means we need a DSP with perhaps a million cores, each with local memory (like map-reduce); this would be like a video card, but with general-purpose cores that can run any traditional language instead of OpenCL/CUDA etc. On the software side, we need more experience with languages like MATLAB/Octave, Elixir, Erlang, Go etc., so that we can build other tools like TensorFlow on top of them from first principles.
Given those two foundations, many of the methods in neural nets, genetic algorithms, etc become one-liners and kids could play around with them the way we might have written ray casters and fractal explorers when we were young. They'll quickly discover novel ways of solving problems, combining intelligent agents in various layers, automatically assigning hyperparameters, basically all the things we struggle with today. And not long after that, there will be a repo on GitHub where you can download a brute force AGI and see how fast it can do your homework.
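For what it's worth, genetic algorithms really are nearly that compact already. Here's a minimal, purely illustrative sketch (toy problem, hypothetical parameters: a small population maximizing the count of 1 bits):

```python
# A genetic algorithm in a handful of lines -- the kind of building block
# the comment imagines kids playing with. The objective here is a toy:
# "onemax", i.e. maximize the number of 1 bits in a bitstring.
import random

random.seed(0)  # fixed seed for reproducibility

def crossover(a, b, mutation=0.01):
    # Uniform crossover: each gene comes from either parent,
    # with a small per-gene chance of random mutation.
    return [random.randint(0, 1) if random.random() < mutation else random.choice(genes)
            for genes in zip(a, b)]

def evolve(fitness, size=50, length=20, generations=100):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: size // 2]          # keep the fitter half (elitism)
        children = [crossover(random.choice(parents), random.choice(parents))
                    for _ in range(size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve(fitness=sum)  # maximize the number of 1 bits
print(sum(best))
```

Swap `fitness=sum` for any scoring function and the same few lines attack a different problem, which is roughly the "playground" quality the comment is pointing at.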
This seems completely inevitable to me and I would have loved to have made a career of it but now there's no time. Any idea you can think of is between 2 weeks and 2 years away from being manifested by someone on the web. Which is why I think we're looking at 5 or 10 advances over the next decade that will make AGI all but certain (barring a global recession like the dot-bomb followed by an embrace of fear like the global war on terror that set tech advances back 10-15 years, but I digress).
I'm not so familiar with such efforts myself, but that seems like an overly general statement to make. Surely there are countless examples of exploring parallel/concurrent programming and processing?
"I would have loved to have made a career of it but now there's no time"
Who knows, as you pointed out, it's becoming easier than ever for non-experts to start playing with machine learning, with access to inexpensive and powerful resources.
"..there will be a repo on GitHub where you can download a brute force AGI and see how fast it can do your homework."
Haha, yes, I can totally see that!
Important reading for these topics.
"the robotic takeover did not start at MIT, NASA, Microsoft or Ford. It started at a Burger-G restaurant in Cary, NC"
His essential points, according to the preface to Technological Slavery, are:
1) Technological progress is carrying us to inevitable disaster. …
2) Only the collapse of modern technological civilization can avert disaster. …
3) The political left is technological society’s first line of defense against revolution. …
4) What is needed is a new revolutionary movement, dedicated to the elimination of technological society. …
Of course now we have things like Bonjour and Avahi.
Well, if the system is advanced enough that its actions become incomprehensible to humans, then it's a moot point. After all, nobody seems to go around questioning free will because of earthquakes.
The dangers seem to me to be from systems that aren't anywhere near as sophisticated. Those little psycho-dogs from Black Mirror are sufficiently scary. And they are much less likely to have read Sojourner Truth than a sentient cloud that keeps us as pets.
The interviewer brought up this article and how scared he was, and I told him he really should not worry about the AI part, though maybe we should worry about the biological part.
He did not appreciate that.
It's weird that we could run a "Where were you when Bill Joy's article came out?" thread.
I'm not terribly concerned about the paperclip maximizer that may someday exist. I am concerned about the pay-per-click maximizer that already exists.
I see no reason that humanity is worth saving over some other, better form of life. I'm not saying it's ethical to mow down innocents to make way for the next generation, but some humans are pretty fucking awful. If our descendants are robots with the intelligence of Hawking, full of kindness and grace, then why should we insist that humanity exist forever? Even if we extinguished some of our evil impulses, like genocide and rape, what makes us better? My late aunt had Down syndrome, but if I could invent a vaccine that stopped it from ever happening again, I would, even though I would never harm someone with Down syndrome.
The best counterargument I can come up with is essentially the robots asking "What else is there to do?"
In some post-singularity paradise the robots can be heading in all directions at near the speed of light, but once they've figured out this universe's physics and ended armed conflict, it stands to reason that they'd end up creating or simulating places that did not have god-like creatures.
I think we might depend on resource constraints and conflict to give our lives meaning. Kinda a riff on Eden's "knowledge of good and evil": a creature's intelligence and wisdom may depend on a universe of consequence.
If any beings exterminated humanity or through inaction allowed it to go extinct, I'd seriously question their kindness.
Truly kind beings would bring humanity along and help us to be less destructive.
Keep breeding stock, but then you get into adjudication of value, and at that point all I can think of is Dr. Manhattan saying "the world's smartest human is no more threat to me than the world's smartest termite".
Knowledgeable nihilists are dangerous. You seem to be one.
After reading Nick Bostrom I'm worried Larry Page is similar.
This kind of thinking creeps me the fuck out.
-- Stephen Hawking
> I think we might depend on the resource constraints and conflict to give our lives meaning.