rayiner 9 days ago [-]
Is the premise correct? Almost all 1990s OS projects ultimately failed. Cairo, Taligent, Copland, BeOS. The projects that survived were 1970's/1980's technology: NeXT Step (OS X), Linux, Windows NT.
pavlov 9 days ago [-]
Don't forget Symbian, probably the last successful operating system designed from scratch.

It started life as EPOC32, the operating system for "palmtop" devices made by UK-based Psion in the mid-90s; then Nokia and Ericsson decided they needed a smartphone OS that wasn't Microsoft's and bought into EPOC, creating Symbian sometime around 1998. It took a while for Symbian to get off the ground because the hardware wasn't there yet and the partners kept bickering, but by 2006 it was shipping on a hundred million devices and was perceived as very successful against Microsoft's Windows Mobile. A few years later, Symbian would be trounced into oblivion by iOS and Android.

The interesting thing about Symbian was that it had a rich native framework but wasn't even close to POSIX or Windows. You didn't even get the C standard library; instead the system was built on an idiosyncratic embedded-centric dialect of '90s C++. This focus on optimization at the expense of programmer convenience turned out to be a total dead-end once PC-style operating systems became viable on phones — but at least it made for a distinctive development experience.

lproven 9 days ago [-]
And of course finally it went FOSS: https://github.com/SymbianSource

I'm surprised nobody has even attempted to pick it up, AFAICT. A very good OS in its day.

EGreg 9 days ago [-]
I miss WebOS
walterbell 9 days ago [-]
It's open-source on an OpenEmbedded base, shipping in LG TVs.

http://webosose.org/develop/architecture/

biggusdickus 8 days ago [-]
WebOS is just Linux with a specific WM.
biggusdickus 8 days ago [-]
The story of GEOS is quite interesting too. From an 8-bit OS for the C64 in the mid-'80s to powering the Nokia Communicator and a bunch of random Japanese products in the mid-to-late '90s.
blihp 9 days ago [-]
No, it's not, on multiple fronts. Copland failed internally before it ever hit the market (i.e. it was never generally released). Sure, the NeXT acquisition was the final nail in the coffin for it, but it had failed long before that. Part of the reason for the NeXT acquisition was that Copland was such a mess. It could be argued whether NeXT or Be was the better option, but Apple desperately needed to buy an OS... because Apple didn't have something that was going to ship.
kilo_bravo_3 9 days ago [-]
>It could be argued whether NeXT or Be was the better option,

I think a lot of people start mashing years together when it comes to BeOS. Apple announced the NeXT acquisition in December 1996.

In December 1996, BeOS was still a developer preview with the first "real" release still about two years away. NeXTStep was already 8 years old.

In 1996 BeOS was at a completely different state of readiness and maturity than it was with R5 (which everyone remembers fondly).

USENET is full of posts, '96-'98, of individual developers triumphantly announcing ports of UNIX core utilities to the BeOS developer releases -- utilities that were already present and mature in NeXT/OpenStep.

Mac OS X Server 1.0 was released about two years after the deal was finalized and I imagine it would have taken about two years of really hard work just to make BeOS multiuser, nevermind porting all of the core utilities.

It took a couple more years (it was a somewhat jerky transition for many users) before a "usable" OS X was released, but from the beginning there was a wide variety of software available for it that was easily ported from NS/OS, and features like Display PostScript, Objective-C, a mature set of APIs, and most importantly of all Project Builder were there from day one.

The software scene on BeOS in 1996 was... sparse.

wmf 9 days ago [-]
In 1996 Mac users were far more familiar with BeOS than NeXTSTEP. Even though it was a developer release, Be was marketing heavily to Mac users (like giving away free CDs) and BeOS could run on Macs so Mac users could try it side-by-side with MacOS and form firsthand impressions. (I was triple-booting MacOS, BeOS, and Linux on my Power Mac with a bunch of Jaz carts.) Meanwhile NeXTSTEP was expensive and ran on expensive non-Mac hardware.

BeOS was also optimized (probably over-optimized) for first impressions; the immaturity and architectural mistakes were hidden behind a facade of "OMG it's so fast and pretty". The BeOS GUI with its cartoony 32x32 icons was also a lot closer to MacOS than NeXTSTEP with its huge high-DPI GUI, so BeOS looked like "Mac done right" while NeXT was a foreign culture.

So yeah, NeXT was better but most Mac users had no way of knowing that at the time.

npunt 9 days ago [-]
Agree with everything you said, and NeXT was the right call (even sans Jobs). Apple needed a mature ecosystem, libraries, etc., not just a base OS.

However, BeOS was far more stable/mature than its 'developer release' status implied. I remember how blown away people were at the dev release in 1996 by how solid it was, not just by how forward thinking its features were.

I don't think multiuser would have been a showstopper for release if BeOS was acquired by Apple. They could have put out 2-3 versions of a single user OS and few would have cared.

foobiekr 9 days ago [-]
something I have been wondering about for years, having been around: was Be actually acquisition bait from day one for Apple? I am convinced that NeXT wasn't; they just didn't build something that was designed for acquisition.

But Be, Inc. and BeOS really strike me as the kind of design, company, compromises, focus and leadership/team that I've come to associate with startups that are actually designed to be acquired by the executive's former employer after some time.

I would love to know if this is true, though in my experience, that is usually only known to the founders and executives except in the really obvious cases.

acomjean 8 days ago [-]
It was started by a former Apple executive, Jean-Louis Gassée [1]. At Apple he said he wanted a kernel in MacOS [2]. When he started Be they did develop their own hardware (the BeBox). They eventually decided to port it to other hardware (hardware is expensive). Microsoft had a stranglehold on most PC manufacturers and the BeBox was PowerPC, so it made sense to try to get Mac clones to use it (there was a brief period in the 90s where Apple clones existed). I got a disk and tried it on a Power Computing Mac clone. It was kinda amazing compared to Mac OS 8. But the lack of software made it hard day to day.

[1]https://en.wikipedia.org/wiki/Jean-Louis_Gassée

[2] https://mondaynote.com/50-years-in-tech-part-11-getting-the-...

https://en.wikipedia.org/wiki/BeBox

foobiekr 8 days ago [-]
I'm aware of all that history. The part I'm asking about is whether his exit plan was Apple from the start, and whether, once Apple went with the other option, the company was simply not viable.

This is very common for "startups" founded by executives who come from a company they know is failing to execute on something critical.

Be smells like this.

Apocryphon 9 days ago [-]
One wonders, what if someone else had bought Be?
BonesJustice 9 days ago [-]
Someone did: PalmSource (the software spinoff from Palm). It was to be the basis for their Palm OS “Cobalt”, which was eventually abandoned.

Or by “someone else”, did you mean someone other than PalmSource?

Let’s be honest, though: most new OSes don’t enjoy much success or longevity. The only successful new arrival I can remember recently was Android, and it had the backing of Google, and was aimed at a relatively new market. It also borrowed heavily from established tech: Linux kernel, JDK and JVM.

I doubt there’s more than a handful of companies that could have turned BeOS into something big, and of those, Apple was probably their best bet.

Apocryphon 9 days ago [-]
Someone who actually had a chance to do something with BeOS, and of sticking around for longer than a few years.
yarrel 9 days ago [-]
Minor notes:

* BeOS was evaluated by Apple as a Copland replacement (along with Solaris, NT, and NeXTSTEP).

* MacOS X discarded DPS for Quartz so it didn't turn out to be much of a feature. But very few applications touched that directly so it wasn't much of a porting issue.

* There was more UNIX and AppKit-based software available for NeXTSTEP than for BeOS but nobody in the Mac world cared about that. Quark were the only company that really blew the transition, but they blew it badly.

everybodyknows 9 days ago [-]
Preceded by Apple's late-80s failure with the "Pink" OS:

http://lowendmac.com/2014/pink-apples-first-stab-at-a-modern...

Shebanator 9 days ago [-]
It wasn't actually preceded by Pink. Copland and Pink were going on in parallel, and the competition between them (and the original MacOS folks as well) was intense. Pink was the first to lose that battle, and got spun out as a result.

Source: I worked on Pink/Taligent for six years. Prior to that I supported A/UX and MPW in MacDTS.

foobiekr 9 days ago [-]
The most amazing story of the 90s was the incredible ball of failure that was Taligent. It's almost impossible to even describe to people the situation around it and get them to understand how crazy the world went for a time. Taligent (and OO, to an extent) was the industry's equivalent to the satanic ritual abuse panic - it dominated the conversation for a brief time but was then solidly forgotten, no one really mentions it, and when you try to tell people born later they don't really believe you.
Shebanator 9 days ago [-]
I agree that the OO hype at the time was crazy, and I was extremely guilty of that mania at the time. But from my POV (as a Pink/Taligent employee) the failures of Taligent had little to do with that. It was much more a problem of constantly moving goalposts (we're an OS! Now we're an OS and a layer on top of AIX! Now we're going to be an optional window manager for AIX!) and vicious politics.
foobiekr 9 days ago [-]
I think that, if we're talking about Taligent itself, that's probably true. I've talked to other Taligenters and they say the last bit in particular.

But the books, the books were straight up crazy and people reacted to them in the same unfortunate way that they reacted to "Design Patterns."

Two more of the crazy OS efforts that deserve mention on the list of failures are the Newton and Magic Cap.

Shebanator 8 days ago [-]
Also, even though Taligent was a monumental disaster, it wasn't all bad. There is a lot of stuff you use every day that came directly out of Taligent's work. For instance, the i18n libraries for Java and ICU (http://site.icu-project.org/design/cpp) were important early work. A lot of Unicode came from Taligent's work (but not all of it), and the President of the Unicode Consortium, Mark Davis, did a lot of the formative work at Taligent. So at the end of the day you can thank (or blame) Taligent for emojis.

Taligent also had quite a bit of influence on the field of unit testing, which I'm proud to have had a hand in. I've written about this in the past: https://shebanator.com/2007/08/21/a-brief-history-of-test-fr...

foobiekr 8 days ago [-]
It is true that many good things come out of failures. There's a lot of value in trying and _not_ succeeding.

I'm a bit of a student of failure. If we want to really dive into deep failure, someone mentioned Workplace OS, which was not just a terrible project but a terrible idea (in the same way that NT's original concept of multiple OS personalities, e.g. 16-bit OS/2 and POSIX, was, just taken to an entirely crazier level).

For truly fun crazy, one has to step away from OS projects to things like graphics (for example, Fahrenheit and Talisman).

Shebanator 8 days ago [-]
I wrote one of those books, specifically "The Power of Frameworks". Like I said, I was definitely deep in the hype cycle at the time.

https://www.amazon.com/Power-Frameworks-Windows-Os/dp/020148...

parasubvert 9 days ago [-]
Taligent and OO frameworks in general spread a massive wave of hype/unreality across the industry that reached other desktop and networked OO technologies, such as OpenDoc, CORBA, and WS-*. COM/DCOM/COM+ and OLE got caught up in this and mostly succeeded in their niche, but remained terribly complicated.
atombender 9 days ago [-]
There was also IBM's SOM (System Object Model), used extensively in OS/2. SOM/DSOM was inspired by CORBA, just like COM. All of them use an IDL and are based on querying for supported interfaces, which is a very powerful mechanism.

They all died except COM, which remains the basis for most APIs that Microsoft releases these days (even after a brief stumble with .NET where, if I remember correctly, they tried to kill COM). It's a great technology, and it's a shame that Microsoft didn't open it up and turn it into a truly cross-platform technology.

What didn't take over the world was this notion of object-oriented documents, which was what OpenDoc and Taligent were all about. This idea that content came with behaviour; you could embed an object from one app into another, e.g. a piece of an Excel table into a Word document, and the table would be live-updateable within Word, with all your interactions basically going between processes as COM calls, with the OO behaviour following the embedded content as it moved around, even when copy-pasted between documents. Very powerful, but super brittle. I used embedding a lot in the 1990s, trying to achieve what the PR told me should be feasible and easy, but it invariably ended with app crashes.
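To make the "querying for supported interfaces" mechanism concrete, here is a minimal sketch of the pattern in plain C++. It is not the real COM headers or GUID machinery; the IID type, the ISpreadsheetRange interface and all the names are simplified stand-ins, but the shape (an IUnknown-style base, QueryInterface, and manual ref-counting) is the same idea.

    #include <cstdint>
    #include <cstdio>

    // Simplified stand-in for COM's IUnknown world: every object can be asked,
    // at runtime, whether it supports a given interface.
    using IID = std::uint32_t;                 // real COM uses 128-bit GUIDs
    constexpr IID IID_IUnknown          = 1;
    constexpr IID IID_ISpreadsheetRange = 2;   // hypothetical embedded-object interface

    struct IUnknown {
        virtual long QueryInterface(IID iid, void** out) = 0;  // ask for another interface
        virtual unsigned long AddRef() = 0;                    // manual ref-counting
        virtual unsigned long Release() = 0;
    };

    struct ISpreadsheetRange : IUnknown {
        virtual double CellValue(int row, int col) = 0;
    };

    // A toy "embedded object". The container only ever holds an IUnknown* and
    // discovers richer interfaces by querying, never by downcasting.
    class RangeObject : public ISpreadsheetRange {
        unsigned long refs_ = 1;
    public:
        long QueryInterface(IID iid, void** out) override {
            if (iid == IID_IUnknown || iid == IID_ISpreadsheetRange) {
                *out = static_cast<ISpreadsheetRange*>(this);
                AddRef();
                return 0;                      // "S_OK"
            }
            *out = nullptr;
            return -1;                         // "E_NOINTERFACE"
        }
        unsigned long AddRef() override { return ++refs_; }
        unsigned long Release() override {
            unsigned long r = --refs_;
            if (r == 0) delete this;           // objects delete themselves, COM-style
            return r;
        }
        double CellValue(int row, int col) override { return row * 10.0 + col; }
    };

    int main() {
        IUnknown* embedded = new RangeObject();     // the container sees only IUnknown
        void* p = nullptr;
        if (embedded->QueryInterface(IID_ISpreadsheetRange, &p) == 0) {
            auto* range = static_cast<ISpreadsheetRange*>(p);
            std::printf("cell(2,3) = %g\n", range->CellValue(2, 3));
            range->Release();
        }
        embedded->Release();
        return 0;
    }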

parasubvert 7 days ago [-]
It turns out that hyperlinking / embedding shouldn’t involve you giving your memory address space to someone else - who knew :)

And really, the web/REST wound up being the object oriented document framework we were all looking for. It was terribly inefficient at first (and still is) but was architecturally simpler. The main issue is no one thought it would be possible to replace Windows with a cross-platform GUI, and no one thought hypertext/hypermedia - a mostly academic concept at the time beyond HyperCard - would be that GUI.

https://www.slideshare.net/StuC/oopsla-2007-the-web-distribu...

atombender 7 days ago [-]
Right — as I remember, if you embedded an Excel object into Word, the Excel part was implemented in a DLL that loaded directly into the Word process space. This would have been so much more stable if it instead started a headless Excel server process. With DCOM, this kind of cross-process COM worked great, but that came much later. OLE 1.0 preceded COM by several years.
p2t2p 8 days ago [-]
They didn’t die. CFPlugIn in Core Foundation, which runs on every iPhone and Mac, is an implementation of COM.
atombender 8 days ago [-]
"They all died except COM".

And I would argue that CFPlugIn is only really inspired by COM, it's not the real thing. It's just IUnknown and the same class layout:

> The CFPlugIn model is compatible with the basics of Microsoft's COM architecture. What this means is that CFPlugIn Interfaces are laid out according to the COM guidelines and that all Interfaces must inherit from COM's IUnknown Interface. These are the only things that CFPlugIn shares with COM. Other COM concepts such as the IClassFactory Interface, aggregation, out-of-process servers, the Windows registry, etc... are not mapped.

http://mirror.informatimago.com/next/developer.apple.com/doc...

Shebanator 9 days ago [-]
It isn't fair to blame Taligent for that hype. Several of these systems existed before the Pink/Taligent project was ever discussed publicly. Heck, we used to talk about how our stuff was better than COM and CORBA all the time.

And OpenDoc was developed more or less simultaneously with Pink, IIRC. But it's been a long time and I never worked on OpenDoc so I might have the timing off a bit on this one.

georgeecollins 9 days ago [-]
That is a great metaphor for how important things like Taligent seemed at the time, which is hard to imagine now. Does anyone remember the Newton OS? It was very strange and still kind of visionary. There were all these bold ideas for operating systems and we kind of settled on Unix with a GUI layer or VMS with a GUI layer. That was good enough.
glhaynes 9 days ago [-]
Add one onto the early-'90s failure list: IBM Workplace OS. Rarely mentioned these days for some reason, but a hugely hyped and super-expensive complete failure.

From the Wikipedia page https://en.wikipedia.org/wiki/Workplace_OS: A University of California case study described the Workplace OS project as "one of the most significant operating systems software investments of all time" and "one of the largest operating system failures in modern times".

danans 9 days ago [-]
Linux was started in 1991, so it really belongs in the BeOS era. But Linux isn't an apples-to-apples comparison. Its success followed the rise of the internet, and the resulting huge demand for a free server OS. No commercial vendors were providing that.

The other OSes you mentioned are all desktop OSes, and that market was already buttoned up by the 90s, as it largely still is today. They were up against much steeper odds.

51lver 9 days ago [-]
The OS part, the GNU userland, was written before the 90s. The kernel was just the missing piece that made it all work on the 386 desktops that were rising in popularity at the time.
kbutler 9 days ago [-]
It's interesting how the gnu project narrative of "we did all the work, the kernel was 'just the missing piece'" has propagated.

The gnu project worked on a kernel - Hurd. It has been "in active development" since 1990. It's "just the missing piece."

philwelch 9 days ago [-]
Hurd was a microkernel that was a genuine attempt at something new. Linux’s monolithic design was very safe, very 70’s/80’s, and was criticized by everyone for not being a serious 90’s microkernel design.
kbutler 8 days ago [-]
This was covered extensively over 25 years ago, including the classic comment by Linus that "In fact the /whole/ linux kernel is much smaller than the 386-dependent things in [the existing microkernel] mach". https://www.oreilly.com/openbook/opensources/book/appa.html

Mach and Hurd were existing microkernels/projects before Linux development began.

Linux dominated because it (partly) worked and was accessible and a community formed around it. Hurd had none of those. It's the canonical proof of Richard Gabriel's prescient "Worse is Better (1991)" http://dreamsongs.com/WIB.html - "The good news is that in 1995 we will have a good operating system and programming language; the bad news is that they will be Unix and C++."

peapicker 9 days ago [-]
NeXT's Mach was a microkernel as well, based on CMU's Mach 2.5 (in the first release), and was on the market in 1988. It was actually fantastic, even when I was running 0.7 of the OS in prerelease at LANL. Hurd started in 1990 so it wasn't NEW new imho (originally based on Mach 3.0, although NeXT went there too). There are other ideological differences, but the core Mach microkernel ideology started at CMU in 1985, making microkernels an 80s technology as well.
mikekchar 8 days ago [-]
I'm really stretching to remember, but I recall that initially the Hurd was planning to have a completely distributed user system (i.e., file handles were essentially URLs, they had a system for dealing with users distributed in the network, etc). They were going to use the Mach kernel message passing in really innovative ways and I seem to recall that they had in mind some fairly radical ideas for distributed processing. I don't think they ever really realized any of that (I haven't actually checked what they ended up doing). My impression was that the project was dogged with a lot of problems, some technical and some not. Had they succeeded, it would have been a really amazing system, I think. One of these days I keep thinking I should check out what they actually did.
mikekchar 8 days ago [-]
Let's be fair to both sides here. The new way of working that Linus brought about absolutely crushed Hurd's "We have our team of experts and we don't want any help". It wasn't so much that the Hurd was destined to fail to deliver, it's more that the success of Linux made it completely irrelevant at a pace that nobody would have expected (not even Linus). I mean they even beat the lawsuits to free up BSD.

This is one of the few things my feeble ageing memory recalls fairly clearly :-) I desperately wanted to work on the Hurd, but they rejected me because I was a noob. Then I was desperately waiting for BSD to pass through the legal gauntlet. Then a colleague said, "What about Linux? They just got X working. It seems pretty good". It really came out of nowhere -- not necessarily due to good development (though there was lots of that), but rather because key early contributors were encouraged and enabled to participate by Linus. We were all desperately waiting and many talented people just leaped at the opportunity to do something (not me, unfortunately -- I got sidetracked with work :-P).

In comparison to the rest of GNU, it was just a piece. A pretty big piece, but just a piece nonetheless. Keep in mind the amount of work that went into GCC and glibc, which at the time were comparable projects. Without those pieces, Linus would have had nowhere to start. Then all the userland tools -- again, without those I don't think anybody would have joined in. They would have waited for BSD. And those userland tools were really good. The first thing I did when I installed a new Unix box was install GNU. GNU was important in Unix land before anyone had ever heard of Linux. Sometimes I think younger people have no real concept of how much code was part of GNU. The goal of GNU was to give you a fully functioning POSIX system. One of the reasons we even have POSIX is because of the work of the guys working on GNU.

When we say that it's Linux with GNU, that's really to distinguish it from, say, Linux with Android. Or Linux with BSD (if anyone does that). I'm not actually sure how much GNU is regularly used in a Linux distro these days, but I still prefer it to alternatives (maybe I'm just old). Again, we're really talking about having a full POSIX compliant system and the kernel is just a part of that. Linux is super fantastic and I actually choose Linux over other possible kernels because of how good it is. But I'm never going to run Android on my desktop, no matter that it runs a Linux kernel. I'd rather not run Android on my phone, if I had any choice in the matter!

To be even more fair, I always thought we should have given X a bit more air time. Especially these days, it's important to me if I've got X or Wayland running. But it was always a bit daft to think that people were going to be saying GNU/Linux. It's even more daft to think that people would say GNU/X/Linux or GNU/Wayland/Linux. It really doesn't roll off the tongue ;-)

derekp7 9 days ago [-]
I think a big part of it was the community that formed around the Linux brand. I have a feeling that it could have been Minix, if it had been properly open source at the time. But a lot of people in the community were promoting Linux as an alternative to DOS/Windows (based on my recollection of Usenet and BBS postings at the time), whereas GNU and BSD were pushed as alternatives to Unix. And Minix's aim was educational, although Minix 386 (a set of community patches to the system) was aimed more at making Minix production-capable.
maxxxxx 9 days ago [-]
"I think a big part of it was the community that formed around the Linux brand. "

People criticize Linus for a lack of social skills, but somehow he managed to start a big and loyal community. That's no small feat.

adrianratnapala 9 days ago [-]
More to the point, Linux was going out of its way to retread technology from the '70s. This is shown by the fact that it had a ready-made userland in the form of GNU components that had been built to be Unix compatible.
wvenable 9 days ago [-]
Linux is just the kernel for an '80s-based OS.
tinus_hn 8 days ago [-]
Linux is a kernel though, not an operating system. I don’t care at all for the GNU/Linux debate but it is of course important not to take the start of one part as the start of the whole thing.
EamonnMR 9 days ago [-]
It was built on solid foundations however.
based2 9 days ago [-]
1993: FreeBSD
51lver 9 days ago [-]
Fork of an older upstream project, which was itself a collection of mods for an even older upstream project. Not 90s.
aasasd 9 days ago [-]
I'd just like to interject for a moment...
dfabulich 9 days ago [-]
If we're looking at kernels, there have clearly been substantially rewritten kernels that launched in the 90s; they succeeded by maintaining backwards compatibility with userland software written in the 70s/80s.

If we're looking at userland, the only successful general-purpose operating systems in the last 30 years that started from scratch with no apps at all are iOS and Android. (Even there, arguably their killer app was backwards compatibility with the desktop web.)

All others, including Windows, OS X, and Linux, had solid backwards compatibility facilities supporting software written for DOS, Mac OS 9, and Unix, respectively.

maxxxxx 9 days ago [-]
Tend to agree. All the OS attempts that tried to seriously innovate the desktop failed. Add WinFS and OpenDoc to that list.
reaperducer 9 days ago [-]
And OS/2.
maxxxxx 9 days ago [-]
It's kind of sad that we are so stuck in the current paradigms. The 90s were definitely much more exciting in terms of innovation.
nostrademons 9 days ago [-]
It's because the only innovations that "count" in the mainstream marketplace are the ones that take us from "not good enough" to "good enough", not those that take us from "good enough" to "excellent". In other words - as a startup, you get customers by taking a non-consumer and turning them into a consumer. It's very hard to take a consumer and turn them into a consumer of something else.

In the 70s we went from "not good enough" to "good enough" in price, but once we got to the PC clone wars of the mid-80s it was hard to go much lower. Then in the 80s and early 90s we went from "not good enough" to "good enough" in user experience, with MacOS 7 and Win 95. The 90s OSes were all attempting to take a "good enough" user experience and make it excellent, and that's where they failed - most mainstream consumers don't care enough about excellence to make it worth learning a new OS. Instead the late 90s and early 00s took us from "not good enough" to "good enough" in information, with a big cost in user experience. The web sucked as a UI and still does, but it opened up literally billions of sites worth of content that a desktop user could only dream about. Now the web has created this whole new problem of trust, which cryptocurrencies solve, but at the cost of regressing 30 years in performance and usability.

Someone1234 9 days ago [-]
But one could ask what is the current paradigm?

For example, you can view it as the desktop stopped evolving, or that we moved to the browser as a virtual machine hypervisor running whatever environment you choose (particularly true with WebAssembly).

The 90s were exciting because the desktop was seen as the center of the computing universe; these days the browser is the center and the desktop is a "me too!" paradigm.

One could argue we got "stuck" or one could argue that evolution shifted to an internet first paradigm with more security in mind.

bluGill 9 days ago [-]
I prefer the '90s overall. There are good points to the modern web/internet world, but given a choice the '90s design is a better paradigm. Of course this assumes that one does their respective paradigm well; there were a lot of bad designs in the '90s that are worse than today's equivalent. However I contend that if effort had continued in the '90s direction the result would be better.

Again, there are some major points in favor of the current internet/web world. For any "program" which you will use rarely, it isn't worth the cost of installing the program.

ghaff 9 days ago [-]
We have two dominant mainstream paradigms.

We have the browser. And we have the app store.

Arguably, for better or worse, the desktop shifts to a combination of these and probably converges with mobile for mainstream users.

51lver 9 days ago [-]
No it wasn't. The '90s were just Intel and Microsoft eating the market and killing innovation from competitors. Wintel easily set us back 20 years.
michaelcampbell 9 days ago [-]
Indeed. Looks aside (although I liked them too), OS/2 was such an unbelievably good user OS. I wasn't programming enough at the time to know if it had a good developer story or not, although I kind of dug REXX.
chiph 9 days ago [-]
Workplace Shell was pretty innovative, and the only firm I know of that took advantage of it was Stardock.

https://en.wikipedia.org/wiki/Workplace_Shell

https://www.stardock.com/stardock/articles/article_sdos2.htm...

acomjean 9 days ago [-]
and its follow-up, OS/2 Warp.

Which I believe is the only operating system advertised in the Super Bowl.

IBM's Windows replacement wasn't so bad in my very limited usage of it. (I used it because the only scanner drivers we had for a scanner at IBM were for an OS/2 Warp machine...)

ghaff 9 days ago [-]
The premise is a stretch. Arguably, general purpose computing today is either *nix or VMS-derived--and I'm not even sure those are sufficiently different in underlying model to be completely separate operating system trees.

That said, I can buy Windows NT as sufficiently distinct to be a unique 1990s OS. But Linux is clearly a *nix and OS X is as well--UI and integration notwithstanding. Sure, you can pick NeXT Step and Linux and go "90s!" but they're clearly part of a much earlier tree.

And, if you bring in mobile, Android clearly derives from Linux. I don't know enough about iOS internals to identify where it sits in the OS tree.

dfabulich 9 days ago [-]
iOS derives from OS X.
lproven 9 days ago [-]
Indeed so.

And NeXTstep isn't a 1990s OS. It's a 1980s OS. v0.8 first demoed 12 October 1988 when the NeXT cube was launched; v1.0 shipped 18 September 1989.

NeXTstep influenced the Windows 95 UI in some ways -- the shaded 3D look, the idea of a fixed panel across the screen which could be both an app launcher and an app switcher.

But NeXT probably got that from Acorn RISC OS, which was shipping before NeXTstep was ever shown.

I've written about that, too: https://www.theregister.co.uk/Print/2013/06/03/thank_microso...

jonhendry18 9 days ago [-]
It also doesn't really address the question of why Copland failed, which likely had a lot to do with Apple's management culture rather than the technology they were attempting to build.
projektfu 9 days ago [-]
In this vein, I think they tried to do too much at once. They should have learned their lesson with the PowerPC. They had two tracks: one was trying to implement a new OS; the other implemented the 68k OS on an emulator. The first track was way late. If Copland could have released just a protected environment at first, and allowed people to start writing for it, they could have added all the rest later.
orangecat 9 days ago [-]
If you look at the Copland technical docs, they describe an OS with modern underpinnings but running the Mac UI as a single process on top of it (https://en.wikipedia.org/wiki/Copland_(operating_system)#Cop...). Developers were supposed to put as much functionality as they could into background processes/threads that would benefit from preemptive multitasking and protected memory.

Amusingly, classic Mac OS ended up sort of close to that if you squint. It had a "nanokernel" that ran tasks preemptively and even supported multiple CPUs, but like with Copland the entire UI ran as a single "blue" task. The main difference from Copland as far as I can tell is that non-UI tasks were heavily restricted in the OS APIs that they could call; for unsupported APIs they had to send a message to the blue task and wait for a response. More details at http://mirror.informatimago.com/next/developer.apple.com/tec....
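To picture the pattern being described (a restricted task packaging up an unsupported call, handing it to the single "blue" task, and blocking for the reply), here is a rough simulation in ordinary portable C++ threads. This is not Copland's or the nanokernel's actual API; every name here is hypothetical, and the queue stands in for whatever message primitive the real system used.

    #include <condition_variable>
    #include <cstdio>
    #include <future>
    #include <mutex>
    #include <queue>
    #include <thread>

    // A request from a restricted task to the "blue" task: which call it wants
    // made on its behalf, plus a slot for the reply.
    struct Request {
        int toolboxCall;                 // hypothetical ID of the unsupported API
        std::promise<int> reply;
    };

    std::queue<Request> inbox;
    std::mutex m;
    std::condition_variable cv;
    bool shuttingDown = false;

    // The single "blue" task: the only one allowed to touch the full Toolbox.
    void blueTask() {
        for (;;) {
            std::unique_lock<std::mutex> lock(m);
            cv.wait(lock, [] { return !inbox.empty() || shuttingDown; });
            if (inbox.empty()) return;   // nothing left and we're shutting down
            Request req = std::move(inbox.front());
            inbox.pop();
            lock.unlock();
            req.reply.set_value(req.toolboxCall * 100);  // pretend to do the call
        }
    }

    // What a restricted task does for an API it may not call directly:
    // queue the request for the blue task and block until the answer arrives.
    int callViaBlueTask(int toolboxCall) {
        Request req;
        req.toolboxCall = toolboxCall;
        std::future<int> answer = req.reply.get_future();
        {
            std::lock_guard<std::mutex> lock(m);
            inbox.push(std::move(req));
        }
        cv.notify_one();
        return answer.get();             // blocks until the blue task replies
    }

    int main() {
        std::thread blue(blueTask);
        std::printf("result = %d\n", callViaBlueTask(7));
        {
            std::lock_guard<std::mutex> lock(m);
            shuttingDown = true;
        }
        cv.notify_one();
        blue.join();
        return 0;
    }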

giobox 9 days ago [-]
Agreed, the premise here is in my opinion pretty wrong. While it too would be a little hyperbolic, I’d be much quicker to argue the opposite.

Also, how successful other OSes were or weren’t at attracting an install base during the 1990s is very tangentially related to why Copland failed.

Slightly related: I’ve always wondered what Apple might be like today if Gassée had succeeded in selling BeOS to Apple instead of Jobs with NeXTSTEP. Probably a nice headstone in the Silicon Valley graveyard next to Sun, SGI etc, but interesting to consider.

smarks 9 days ago [-]
Yes. To paraphrase Ray Noorda, it'd still be dead, but it would have been a much more interesting death.
peatmoss 9 days ago [-]
Oh my goodness, just think of the eulogizing of NeXT that we’d be doing here on Hacker News.

But who knows, maybe in that alternate timeline Sun succeeds in making a Solaris for the masses, and Linux becomes an obscure footnote. Some flavor of BSD ultimately becomes the oddball alternative to Windows (which dominates) and Solaris which is the runner-up developer favorite.

scarface74 8 days ago [-]
Dead.

Apple almost died the first time mostly because Gassée went for margins instead of market share when he was at Apple.

Apocryphon 9 days ago [-]
And then BlackBerry or Microsoft CE ends up launching the mobile revolution. Oh dear.
wmf 9 days ago [-]
Nah, those were available for years and never reached anything close to iPhone/Android levels of adoption. It's possible that Android (the "better Danger" version, not the "iPhone clone" version) would have still happened, but it may also have languished without the iPhone to pave the way.
lproven 9 days ago [-]
Agreed.

The original pre-Google Android was a Blackberry-killer, and I don't reckon that was a good way to go.

You can see some early prototype devices here: https://www.androidcentral.com/android-pre-history

As usual for Ars, it has a far more detailed history, with pre-1.0 screenshots: https://arstechnica.com/gadgets/2016/10/building-android-a-4...

You can see that, before the iPhone, it started out as a Blackberry clone, or something like it. And that, IMHO, was far too geeky a toy to change the world as the iPhone did.

peatmoss 9 days ago [-]
On the other hand, maybe we’d all have physical keyboards now instead of poking at glass...
tinus_hn 8 days ago [-]
I remember being so enthusiastic about Android development and then the iPhone came along and was just 10 years ahead.
sodosopa 9 days ago [-]
OS/2 as well. As soon as Microsoft split from IBM on OS/2 dev, NT progressed well. So, I wouldn’t call NT a failure, as it continued on as the well-liked Windows 2000 and parts of it were in XP.
EamonnMR 9 days ago [-]
NT didn't fail, it's still going in the form of Win10. But it's very much built along the lines of VMS. Which was the observation in the parent - the radical experiments failed and the Unix/VMS OSs survived.
laumars 9 days ago [-]
BeOS was largely POSIX compliant and that still failed.

I think the answer to why some platforms failed and others didn’t is a much simpler one: partly due to the aggressive business tactics from their CEO and partly due to luck.

Linux is an outlier there, but I think you can substitute the CEO effect with the GPL; however, the luck element is still relevant.

If there is one thing the history of computing teaches us, it’s that a better product doesn’t mean a more successful one.

lproven 9 days ago [-]
Firstly, I think I have to say 'define "failed".' The company is sadly dead and gone, swallowed by PalmSource. BeOS tech was used to build the ARM-native, multitasking, media-savvy PalmOS 6 "Cobalt" -- which sadly never shipped on any devices. PalmSource was then in turn swallowed by Access Corp of Japan.

These guys: https://en.wikipedia.org/wiki/Access_(company)

Still alive, to quote GLaDOS. It provides the Kindle web browser, for instance.

Second, BeOS has been reborn as Haiku, which is also very much alive, and includes what Be code it legally can (the Tracker, mainly).

Haiku is IMHO the most complete/most interesting desktop FOSS out there. Haiku is now self-hosting and recently entered beta, after a long gestation. There's very little manpower behind it, so progress is slow, but it is moving.

So, significant influence, I think it's fair to say. BeOS shipped, it sold, I reviewed v5 and it remains my favourite x86 OS ever written. (Yeah, I'm biased. Sue me.)

laumars 9 days ago [-]
I think you’re forcing your definition of “successful” a little. I’m not going to disagree with you that BeOS was awesome; it still remains one of my all time favourite desktop OSs. But regardless it wasn’t commercially successful and thus isn’t sold to consumers any longer.

Haiku is an interesting one. I’d put it in the same category as ReactOS. It’s successful in the sense that it’s an open source project that’s under active development. However they’re still essentially just hobbyist platforms, so I wouldn’t even rank them successful when compared to Linux on the desktop (eg Ubuntu, Fedora, etc), let alone compared to commercial platforms like Windows or OS X.

It really is a great shame BeOS wasn’t more successful though, it was an amazing platform (even without framing it in the context of the shit that was around at the time: Windows 9x and Mac OS 9). Sadly for Haiku, desktop computing has moved on and I just don’t think there’s any need for a classic BeOS desktop any more.

mr_toad 8 days ago [-]
> Linux is an outlier

Linux was long the go-to OS for cheap servers running on cheap hardware.

When the dominant computing paradigm shifted to large arrays of cheap boxes (map-reduce -> Hadoop -> cloud) Linux was in the right place at the right time.

laumars 8 days ago [-]
Linux was already pretty big by the time that had happened. Plus it was BSD that was originally favoured as the cheap go-to OS by ISPs (before Linux really took off).

Without wanting to start a flamewar, I honestly think it was the GPL licencing that made Linux what it is. BSD was more mature, arguably better designed, and was already in use and battle tested. But the GPL forced collaboration a little more, and I think that really appealed to hackers.

(I’m not arguing that GPL is better nor worse than BSD/MIT/whatever. I have no strong allegiances with either side of the camp)

duxup 9 days ago [-]
Yeah, OS/2 was my first thought as well. It had a lot of support from IBM, though I don't think the folks running that show understood the market, the users, or Microsoft.
davesque 9 days ago [-]
This was my first thought. I think the premise is founded in a kind of survivorship bias which is worsened by the fog of time.
tinus_hn 8 days ago [-]
Almost everything fails, you just remember the few that succeeded. There were thousands of Linux distributions, thousands of movies, millions of songs. Most of them failed and were forgotten.
blattimwind 9 days ago [-]
Notably practically all 90s RISC operating systems were gone by the time the dot-com bubble burst.
jonhendry18 9 days ago [-]
Not really.

Some were ported to Intel. Others (HP-UX) became dedicated big iron operating systems. HP-UX and Solaris are still being updated and used.

2trill2spill 9 days ago [-]
Solaris is gone[1]. Illumos, a fork of OpenSolaris, still lives on however.

[1]: https://www.theregister.co.uk/2017/09/04/oracle_layoffs_sola...

skissane 9 days ago [-]
> Solaris is gone

Solaris isn't dead. Oracle released version 11.4 in August this year: https://blogs.oracle.com/solaris/oracle-solaris-114-released...

The article you linked to doesn't say Solaris is gone either. It mentions large numbers of layoffs, but acknowledges that there is still a Solaris dev team in place (even if a significantly smaller one).

(Disclosure: Former Oracle employee, although I never worked on Solaris.)

jiveturkey 9 days ago [-]
i agree. there is a selection bias going on. like businesses, most attempts fail.
katuskoti 9 days ago [-]
If anyone has watched the very tech-oriented anime "Serial Experiments Lain", Copland OS is used by the protagonist, Lain. I happen to love the anime so I set up my neofetch terminal image (an actual PNG, with w3m) to the logo of Copland OS used in the show.
bluejekyll 9 days ago [-]
> A/UX was very impressive for its time — 1988, before Windows 3.0. It could run both Unix apps and classic MacOS ones, and put a friendly face on Unix, which was pretty ugly in the late 1980s and early 1990s.

I realize I really don't know much about A/UX at all. I didn't even know it could run classic MacOS apps. Does anyone have a link to more about the OS? I always assumed it was just a clone of System V, but if it could run classic MacOS apps, that meant it was more than just that.

Taniwha 9 days ago [-]
I worked on the original port (did about half the kernel stuff) - it was our standard System V with Berkeley sockets, and I wrote some of the Apple-specific stuff (kernel event manager, AppleTalk, etc.) - one thing I was particularly proud of was on-the-fly loadable Unix device drivers, somewhat ahead of their time - feel free to ask me questions

The Mac compatibility stuff (done at Apple) essentially ran in a single Unix thread - really a VM for the Mac OS 7 world - it ran in user mode (Mac OS apps usually ran in kernel mode) and emulated exceptions

jamesfmilne 9 days ago [-]
Hi Taniwha

I’m working on a new Ethernet card for the Mac SE/30, and I’d love to be able to get it working with A/UX 3.

https://www.mactothefuture.org/

If you had any pointers on how one could write an Ethernet driver for A/UX I’d be very appreciative! :-)

Taniwha 9 days ago [-]
oooh .... sadly I haven't had access to kernel source for maybe 30 years, it's been a long time. Essentially you need to write a BSD-style networking driver (of the era, so likely 4.1) along with interrupts/etc. I think without source it's going to be really difficult - on the other hand you can use autoconfig to load it into an old kernel, hook up the interrupts and set it running
jamesfmilne 8 days ago [-]
Cheers. I'll see what I can dig out of the fossil record about autoconfig ;-)
Taniwha 7 days ago [-]
autoconfig is essentially a front end like the Linux module loading tools: it makes some fake COFF files containing the kernel symbol table and some glue code and links them against the driver(s) you want to load, then it loads the code into kernel space, patches the block/char major tables and calls the driver's init routine

Sadly it's not as functional as we would like. I actually wrote it for UniSoft as a way we could sell drivers without building kernels every time; someone at Apple saw it and not only demanded we include it in A/UX but also demanded that they own it .... so I wrote another one for Apple, which worked differently and was barely functional - Apple could have had the original, better one for free if they'd dropped the demand that they owned it
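For anyone who never saw one of these loaders, the last two steps look roughly like this in spirit. This is a user-space toy in C++ with entirely made-up names; the real A/UX structures and autoconfig interfaces were different, it only illustrates the "patch the char major table, then call the driver's init routine" idea.

    #include <cstdio>

    // A toy analogue of a character-device switch table: one row of entry
    // points per major number. (Names invented; not the real A/UX layout.)
    struct CdevEntry {
        int (*open_)(int minor);
        int (*read_)(int minor, char* buf, int len);
        int (*close_)(int minor);
    };

    constexpr int kMaxMajor = 8;
    CdevEntry cdevsw_table[kMaxMajor] = {};   // the "kernel" table, initially empty

    // A "loadable driver": its entry points plus an init routine.
    struct LoadableDriver {
        CdevEntry entries;
        int (*init_)();
    };

    // What a loader in the spirit of autoconfig does once the object code is
    // in place: patch the entry points into the major table, then call init.
    bool loadDriver(int major, const LoadableDriver& drv) {
        if (major < 0 || major >= kMaxMajor || cdevsw_table[major].open_ != nullptr)
            return false;                     // slot out of range or already taken
        cdevsw_table[major] = drv.entries;    // patch the char major table
        return drv.init_() == 0;              // call the driver's init routine
    }

    // A trivial fake Ethernet-ish driver.
    static int fakeOpen(int)                 { std::puts("fake: open");  return 0; }
    static int fakeRead(int, char* b, int n) { if (n > 0) b[0] = 'x'; return 1; }
    static int fakeClose(int)                { std::puts("fake: close"); return 0; }
    static int fakeInit()                    { std::puts("fake: init");  return 0; }

    int main() {
        LoadableDriver fake{{fakeOpen, fakeRead, fakeClose}, fakeInit};
        if (loadDriver(3, fake)) {            // pretend major number 3 is free
            char buf[4];
            cdevsw_table[3].open_(0);
            cdevsw_table[3].read_(0, buf, sizeof buf);
            cdevsw_table[3].close_(0);
        }
        return 0;
    }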

npunt 9 days ago [-]
Been watching this project for a while (via 68kmla), haven't seen any updates recently but excited about any progress you make!
jamesfmilne 8 days ago [-]
Yeah, sorry about that! Real life got in the way a bit this year.

I've got the card sending & receiving packets properly now, but having a few issues with the CPLD, making the machine crash sometimes when accessing the card. Hopefully once that's fixed I can make a rev2 of the board and release some schematics & drivers.

npunt 7 days ago [-]
Nice to hear you're making progress. Given these are 30 years old, what's a few months here or there, right? :) Look forward to seeing v2.
bluejekyll 9 days ago [-]
I guess the only question I have is: do you have an opinion on why this OS wasn’t chosen as the means of modernizing MacOS, rather than the Copland effort? It sounds like you had a huge headstart on what turned out to be a similar strategy with virtualized Mac OS 9 on Mac OS X.

I could see it being a case of wanting something shiny and new. Thanks for your response!

Taniwha 9 days ago [-]
I didn't work for Apple, I worked for UniSoft, who did the Unix port to the Mac II (we got half of the original batch of Mac II proto boards, and I got to debug [and fix] the hardware).

After we handed it over, the group who did the UI work were pretty small within Apple - I think it was mostly politics. My view of Apple in those days (and the few years after) was that everything was politics; I remember the FireWire guys coming around and shilling for supportive developer comments to try and keep their project afloat at one point. I'd guess that 2/3 of the cool things designed at Apple got shelved, and people who had poured several years of their lives in would walk.

A/UX died more slowly; switching to the PPC killed it, as Apple decided not to do a Unix port (we probably could have done one faster than they got MacOS working)

bluejekyll 8 days ago [-]
excellent, thanks for the insight.
classichasclass 9 days ago [-]
It can run some. The Finder is basically System 7.0, with all that implies. Some CDEVs and INITs will work in it, and some others will not only not work but (from personal experience) make it impossible to bring up the Mac side. Some Mac apps will balk at what's missing from the OS or are used to playing fast and loose in ways A/UX will prohibit.

That said, it is remarkable how the two sides work together. It's not a perfect union because X apps still have to come up in a dedicated X server which runs on the Mac side, so even graphical apps don't come together seamlessly. But even within its limitations it presents a compelling illusion of a unified whole and Commando is a great way to discover command line options.

Taniwha 9 days ago [-]
most of the really bad issues were things that were broken in OS 7 running VM too - mostly it was people messing with the upper 8 bits of pointers directly rather than using the standard handle APIs, but expecting to be able to write directly to hardware devices or 'knowing' about undocumented stuff inside the ROMs was also an issue
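For the curious, the sin being described looks roughly like the first function below, versus the documented Memory Manager calls in the second. This is classic Mac Toolbox C/C++, so it only builds with the old toolchains (MPW, CodeWarrior and friends); treat it purely as an illustration.

    #include <MacMemory.h>   /* <Memory.h> on older toolchains */

    /* The 24-bit-era habit that broke under A/UX and on 32-bit-clean systems:
       the lock/purge flags used to live in the high byte of the master pointer,
       so code poked them directly. In a 32-bit address space that byte is part
       of the address. */
    static void lockHandleTheBadWay(Handle h)
    {
        *(unsigned char *) h |= 0x80;     /* set the lock bit by hand: ouch */
    }

    /* What well-behaved code was supposed to do instead: go through the
       documented Memory Manager calls, which work in both worlds. */
    static void lockHandleTheGoodWay(Handle h)
    {
        SInt8 savedState = HGetState(h);  /* remember the current flags */
        HLock(h);
        /* ... safely use the dereferenced pointer *h here ... */
        HSetState(h, savedState);         /* restore lock/purge state afterwards */
    }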
salgernon 9 days ago [-]
There's an emulator that was written to run just A/UX:

https://github.com/pruten/shoebill

You need to track down roms and install media.

ghaff 9 days ago [-]
The Wikipedia article [1] is reasonably thorough. It was a licensed System V with some BSD features (and Apple UI, etc.) added. (Most of the Unixes of that era mixed System V and BSD together to various degrees.) According to the linked article, it ran a few MacOS apps but not many.

I actually have an A/UX coffee mug on my shelf complete with a phone number to call for more information (no URL or email :-))

[1] https://en.wikipedia.org/wiki/A/UX

protomyth 9 days ago [-]
It was more than just licensed System V, but it only ran on the 68K line of Macs, so it really didn't have a chance and was dead before Copland.
lproven 9 days ago [-]
Hi. Author of the piece here.

It's a repurposed Quora answer; the original question might clarify what I was answering: https://www.quora.com/Why-was-Apple-unable-to-complete-Copla...

walrus01 9 days ago [-]
Anyone who ever used MacOS X v10.0 (or the server preview of it) around the year 2000/2001, and had previously used a NeXT from the command line, could immediately tell that it was the NeXT OS with a MacOS-resembling veneer of GUI on top of it.
dep_b 9 days ago [-]
What's more impressive is that Mojave would be really familiar to those users from the year 2000. Compare that to the mind-blowing UX changes Windows went through in the same time... and none of them were ever really all-encompassing or guaranteed to stick around for long.
aasasd 9 days ago [-]
Well, idk about Win10, but in 8 Microsoft obviously thought a UI should be made out of tables, just as they did in '95.
blattimwind 9 days ago [-]
> and none of them were ever really all-encompassing or guaranteed to stick around for long.

I recently learned that the ribbon used in Office is proprietary to Office itself and wasn't/isn't used in other products - not even in other Microsoft products, which use a different ribbon, which even behaves differently.

aasasd 9 days ago [-]
Ooh, that's juicy! Could you, by any chance, please share a link to read up on details?
dep_b 8 days ago [-]
I've read that the Office division basically does everything separately from the Windows division because they want total control. Everything. Frameworks, tooling, you name it.
npunt 9 days ago [-]
Yep, and meanwhile Mojave would be really familiar to an original Macintosh user from 35 years ago. So much has changed yet so little.
duskwuff 9 days ago [-]
I can't find the screenshots offhand, but someone recently pointed out that the wording of one Finder alert dialog ("The document $1 can't be opened because it is in the Trash", IIRC) hasn't changed since System 7.
Aloha 9 days ago [-]
I dunno - the, umm, Finder interaction can be significantly different
robterrell 9 days ago [-]
Rhapsody! I have the box and CD installer somewhere. It was 100% NeXT with the classic Mac OS gui chrome bitmaps. A vertiginous uncanny valley.
amyjess 9 days ago [-]
> It was 100% NeXT with the classic Mac OS gui chrome bitmaps. A vertiginous uncanny valley.

From Nathan Lineback's screenshot gallery:

http://toastytech.com/guis/rhap.html

http://toastytech.com/guis/osxsv.html

zapzupnz 9 days ago [-]
Yellow Box on Windows was even weirder, since it had three themes: Windows, OPENSTEP, and Mac OS. The Mac OS theme was faithful in every detail except the font rendering, which looked really bizarre.
marssaxman 9 days ago [-]
It wasn't even a particularly convincing resemblance; remember when the Apple logo was in the middle of the menu bar for a while?
malvosenior 9 days ago [-]
> It’s often said that Apple didn’t take over NeXT, nor did it merge with NeXT — in many important ways, NeXT took over Apple.

This is pretty much the answer. NeXT was Jobs' baby and he was happy to deploy all the tech through Apple when he came back. It worked out really well for them. Copland dev was also lagging and pre-Jobs (return) Apple had a decided lack of ability to ship.

webwielder2 9 days ago [-]
NeXT was an unintentional Apple skunkworks.
microtherion 9 days ago [-]
The NeXT management discarded Copland, most Apple technologies — OpenDoc, OpenTransport, GameSprockets, basically everything except QuickTime. [...] It took the existing MacOS classic APIs [...] and cut out everything that wouldn’t work on a clean, modern, memory-managed, multitasking OS.

I've never seen the innards of the above technologies, but to the extent that this passage gives the impression that the technologies that were cut (and one could add QuickDraw 3D and QuickDraw GX to the list) were the least modern and future-proof, I think that's exactly backward. It's largely the most modern technologies that were cut, and it's the crufty ancient APIs that made it into Carbon.

Something like OpenDoc would probably have been reasonably portable, given that it was based on IBM technologies. OpenTransport was based on System V streams, GameSprockets was based on a QuickTime stack which largely survived for some time.

Presumably those decisions were made because the new APIs, gorgeous as they were, didn't have major adoption yet, and Apple desperately needed to focus.

acqq 9 days ago [-]
Some chronology:

An end of 1995 article:

https://web.archive.org/web/20070610194914/http://www.busine...

"APPLE'S COPLAND: NEW! IMPROVED! NOT HERE YET!"

"Says one recently departed Apple engineer: ``There's no way in hell Copland ships next year. I just hope it ships in 1997.''"

One year later:

https://www.cultofmac.com/459054/apple-buys-next/

"December 20, 1996: Apple Computer buys NeXT, the computer company Steve Jobs founded after leaving Cupertino a decade earlier."

A little more than two years after that, already 1999:

https://en.wikipedia.org/wiki/Mac_OS_X_Server_1.0

"Mac OS X Server 1.0, released on March 16, 1999,[1] is the first operating system released into the retail market by Apple Computer based on NeXT technology."

stewbrew 9 days ago [-]
Did it actually "fail" or was it discarded? As a Mac user from back then, I remember copland only from some articles and from macos gadgets that made the classic macos somehow look like copland, i.e. like a weird teeny disco box. It could be that there were a few month were copland was actually released to the wild, but at that time I already ran suse linux.
classichasclass 9 days ago [-]
Copland was a bundle of fail. It would crash doing nothing at all and couldn't run anything of substance. But, typical of Apple's death spiral at the time, it took Ellen Hancock saying a strong, unequivocal "kill it" to get Amelio to do so. I bet Apple would have continued to iterate on it to their doom if that hadn't happened.
plushpuffin 9 days ago [-]
Came here to post this, and upvoted you instead. Everything I've read about it indicated that Copland never worked and was an unstable POS that was impossible to develop for.

At one of the Apple developer conferences, people booed and criticized the new OS's unimpressive capabilities during a demo/slideshow, prompting Amelio to come back on stage and promise to "tack on" symmetric multiprocessing. For an OS that was supposedly mere months away from release...

rodgerd 9 days ago [-]
I worked for a Mac vendor at the time, and Copland never got beyond slideware for us. It was ridiculously ambitious: an (Apple-developed) microkernel that would have a classic MacOS server, a multiuser next-gen MacOS, plus even claims it could be possible to have an NT server on it. Complete UI customizability, etc, etc.

Essentially they were promising something comparable to NT 4, from an organization with a fraction of the team Microsoft used to deliver it.

vondur 9 days ago [-]
As mentioned in the article, the main reason for failure was trying to put more modern tech into an operating system that still had to support processors like the original 68000, which wasn't fully 32-bit. Also, I don't believe that the older 68k processors supported virtual memory. Maybe if they had just targeted PowerPC CPUs, Apple might have had a chance with Copland.
em-bee 9 days ago [-]
i love this comment on the article about the attempt by atari:

Atari MiNT! It was an attempt to bolt UNIX semantics on top of TOS, which itself was already a weird mashup of CP/M and DOS. It ran on 68k ST boxes, and was about as bonkers as you'd expect, in ways that I can summarize with the pathname "U:\DEV\NULL".

st3fan 9 days ago [-]
Copland did not fail; it never shipped. It was one option on the table, but they picked NeXTSTEP instead.
bunderbunder 9 days ago [-]
I'd argue that never even getting to shippable state is generally a worse failure than spending a few years on the market but never really taking off in any serious way.
atombender 9 days ago [-]
How's that not failure? Copland was intended to ship as Mac OS 8. It did not ship. It was canceled, with almost all the money and work put into it (with the exception of self-contained tech like QuickTime) being completely wasted. It did not succeed. So it failed.
sologoub 9 days ago [-]
Reading this made me think of how iOS and macOS are evolving - the UIKit and iOS apps on Mac.

Strategy feels similar - make things compatible enough and force apps to adapt somewhat. Of course this is a gross oversimplification, but who knows.

zackmorris 9 days ago [-]
I witnessed all of this since I started using Macs around '84/'85 and programming them around '89. I'm still in mourning about:

* Since Classic MacOS (OS 9 and below) didn't have a command line, it had GUIs for tweaking system settings. Better yet, it had a budget for preventing user interface issues in the first place. The user experience on Classic MacOS was simply better than anything we have today, on any platform (including iOS - and yes I realize this is subjective). The flip side is that the platform evolved faster until the late 2000s because developers could tinker more freely. Since the vast majority of users are not programmers, I don't think this was a win. To me, something priceless was lost, that may never be regained even with the incubator of the web pushing the envelope.

* I often wish that Apple had made a Linux compatibility layer. That entire ecosystem of software is simply not in the Mac fanbase's radar. This isn't such a huge issue now with containerization, but set everything back perhaps 10-20 years. Apple did little to improve NeXT (to make it something more like BeOS, or the Amiga). We really needed an advanced, modern platform like Copland or A/UX like the article said. But in the end, Steve Jobs knew that didn't really matter to like 99% of users, and he was probably right. Still, I'm in that lucky 1% that sees the crushing burden of console tool incompatibilities and an utter lack of progress in UNIX since the mid 90s.

* Much of the macOS GUI runs in a custom Apple layer above FreeBSD (rather than using something like X11). I'm not really convinced that the windowing system is that optimized, because it used to use a representation similar to PDF. So for example, I saw weird artifacts and screen redraws back when I was doing Carbon/Cocoa game programming, especially around the time OpenGL was taking off. Quartz is powerful but I wouldn't say it's performant. A 350 MHz blue & white iMac running OS X had roughly the same Finder speed as an 8 MHz Mac Plus running System 7 or a 33 MHz 386DX running Windows 95. Does anyone know if the windowing system is open source?

I could go on, in deeper detail, but it's futile. I think that's what I truly miss most about Classic MacOS. If you ever watch a show like Halt and Catch Fire, there was a feeling back then that regular folks could write a desktop publishing application or a system extension (heck whole games like Lunatic Fringe ran in a screensaver) and you could get Apple's attention and they might even buy you out. But today it's all locked down, we're all just users and consumers.

I still love the Mac I guess, and always come back to it after using the various runner ups. But I can't get over this feeling that it stopped evolving sometime just after OS X came out, almost 20 years ago. There is this gaping void where a far-reaching, visionary GUI running on top of a truly modern architecture should be. All we have now is a sea of loose approximations of what that could be. I wish I knew how to articulate this better. Sorry about that.

doggydogs94 9 days ago [-]
Classic MacOS did have a command line. You had to install it separately. The command line was the Apple development tool MPW.
yarrel 9 days ago [-]
Because it wasn't simply copying something from the 1970s like the Linux and NT kernels.
simonsays2 9 days ago [-]
NT was not based on OS/2. The author is misinformed.
acdha 9 days ago [-]
Not only was it based on OS/2, it even supported some of the OS/2 APIs for a while, along with things like the HPFS filesystem:

https://web.archive.org/web/20090210125723/http://www.micros...

https://support.microsoft.com/en-us/help/100108/overview-of-...

See https://en.wikipedia.org/wiki/OS/2#1990:_Breakup for more information about how they diverged over time.

Aloha 9 days ago [-]
I think it still does support OS/2 console applications.
acdha 8 days ago [-]
I thought that was removed in the XP/2003 era:

https://web.archive.org/web/20041019061658/http://support.mi...

skissane 9 days ago [-]
The statement "NT is based on OS/2 3.x" is misleading. NT isn't based on OS/2 3.x, it was originally meant to be OS/2 3.x, until the IBM-Microsoft divorce. But, despite being originally intended to be OS/2 3.x, there was not much OS/2 1.x/2.x code in it. The main areas of inherited code were HPFS and the OS/2 compatibility subsystem, neither of which ended up being core features, and were dropped in newer releases. Outside of those, there was very minimal code inherited from 1.x/2.x. "NT is based on OS/2 3.x" makes it sound like NT inherited more of the design or implementation of OS/2 1.x/2.x than it actually did.
lproven 9 days ago [-]
Allow me to enlighten you a little bit.

"The family link between OS/2 and Windows NT" https://liam-on-linux.livejournal.com/54138.html

"Follow-up: the family links between DOS, OS/2, NT and VMS" https://liam-on-linux.livejournal.com/54464.html

There's a citation in that 2nd blog post. Well, there are lots, but the IT Pro Mag one goes into detail about the OS/2 and VMS connection.

Enjoy.