Mark 1 Forth Computer (2006) (aholme.co.uk)
ChuckMcM 1167 days ago [-]
In my opinion, everyone who considers themselves a "software engineer" should be required to build a computer out of discrete logic. Of course these days doing that with an FPGA would count too, assuming you don't use too many pre-canned IP blocks[1]. My reasoning here is that understanding a computer at this level really helps you understand programming, and perhaps more importantly the translation between what you want a program to do and how the computer would actually do it.

That said, a few nits on the author's description: what he calls "micro code" is really just instructions. When you look at computer architecture texts you will see that architectures that implement instructions with logic are "non-microcoded", and architectures that implement instructions of one width as a sequence of instructions of a different width are "microcoded". The trade-off is that the former executes operation codes faster, while the latter can be implemented more efficiently. Great real-world examples to study here are the PDP-8 (straight opcodes) and the PDP-11 (microcoded).
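To make the distinction concrete, here is a toy model in C (a hypothetical two-instruction machine, nothing to do with the author's design): the hard-wired version decodes an opcode straight into datapath control, while the microcoded version looks it up in a table of narrower micro-operations and steps through them.

  /* Toy model of microcoded vs. non-microcoded instruction dispatch.
   * Hypothetical two-instruction machine; purely illustrative. */
  #include <stdio.h>

  enum op  { OP_ADD, OP_LOAD };
  enum uop { U_READ_REG, U_ALU_ADD, U_MEM_READ, U_WRITE_REG, U_END };

  /* Microcoded: each opcode indexes a sequence of narrower micro-ops. */
  static const enum uop urom[][5] = {
      [OP_ADD]  = { U_READ_REG, U_ALU_ADD,  U_WRITE_REG, U_END },
      [OP_LOAD] = { U_READ_REG, U_MEM_READ, U_WRITE_REG, U_END },
  };

  static void run_microcoded(enum op opcode)
  {
      for (const enum uop *u = urom[opcode]; *u != U_END; u++)
          printf("  micro-op %d\n", (int)*u);   /* one control step per micro-op */
  }

  /* Non-microcoded: the opcode is decoded by "logic" (here, a switch)
   * into control signals in a single step. */
  static void run_hardwired(enum op opcode)
  {
      switch (opcode) {
      case OP_ADD:  printf("  regs -> ALU(add) -> regs, one cycle\n"); break;
      case OP_LOAD: printf("  regs -> memory   -> regs, one cycle\n"); break;
      }
  }

  int main(void)
  {
      printf("microcoded ADD:\n"); run_microcoded(OP_ADD);
      printf("hardwired  ADD:\n"); run_hardwired(OP_ADD);
      return 0;
  }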

The most amazing thing to me is that today you can buy something like the ULX3S[2] and with a fully open source tool chain for the FPGA build a processor (RISCV) that boots Linux multi-user.

[1] Using a pre-made ALU is fine, starting from a previous CPU, less good :-).

[2] https://www.crowdsupply.com/radiona/ulx3s

qayxc 1167 days ago [-]
> My reasoning here is that understanding a computer at this level really helps understand programming, and perhaps more importantly the translation between what you want a program to do and how the computer would actually do it.

This logic would've worked in 1985, and maybe, just maybe, until 1993. Modern CPUs cannot be programmed at the level that TTL provides; the µArch has long since become an abstraction in its own right.

Depending on the instruction set, there's often no way to predict beforehand how certain data and code paths will behave. That's why modern compilers use optimisation techniques such as profile-guided optimisation.
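For example, the usual PGO flow with GCC (Clang is similar; this is just a generic sketch) is a two-pass build: compile with instrumentation, run a representative workload, then recompile using the recorded profile so the compiler can lay out the branches it could not predict statically.

  /* branchy.c: a branch whose bias the compiler cannot know statically.
   * Typical profile-guided build (standard GCC flags):
   *   cc -O2 -fprofile-generate branchy.c -o branchy
   *   ./branchy < representative_input     (writes profile data)
   *   cc -O2 -fprofile-use branchy.c -o branchy
   */
  #include <stdio.h>

  int main(void)
  {
      long hot = 0, cold = 0;
      int c;
      while ((c = getchar()) != EOF) {
          if (c == '\n')        /* rare or common? only real input data says */
              cold++;
          else
              hot++;
      }
      printf("hot=%ld cold=%ld\n", hot, cold);
      return 0;
  }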

If you think that certain µArchs are easier to understand in that regard than others (e.g. x86 vs ARM), think again. There's inaccessible embedded hardware [0], and SoCs hidden behind proprietary RTOSes [1] that prevent direct access.

Detailed knowledge about building the hardware is therefore of very limited use in real-world applications, unless you plan to become a firmware, driver, or compiler developer.

Even in the embedded world, data sheets can be deceiving, and ultimately knowing your (compiler) toolchain and knowing how to test the behaviour in practice is far more valuable.

General knowledge should be required, yes, but going so far as to build an actual computer "from scratch" isn't as helpful as it might seem.

[0] https://news.ycombinator.com/item?id=25801500 [1] https://en.wikipedia.org/wiki/VideoCore

TheOtherHobbes 1167 days ago [-]
Agreed. The distance between CS abstractions has widened to the point where the lower level abstractions are only relevant to the people who build them.

There's a close and obvious relationship between C statements and the PDP-11 instruction set. And at least one PDP-11 series processor was implemented in gate-level logic, with no microcoding. So anyone with time to spare could copy that. It's not a trivial job, and it's probably more time than it's worth. But it's not unimaginably hard either.
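The classic illustration is the string-copy idiom, where C's auto-increment maps almost one-to-one onto the PDP-11's autoincrement addressing modes (the assembly in the comments is illustrative; the register choice is arbitrary):

  /* C's *p++ idiom lines up closely with PDP-11 autoincrement addressing. */
  void strcopy(char *dst, const char *src)
  {
      while ((*dst++ = *src++) != '\0')   /* loop: MOVB (R1)+,(R2)+            */
          ;                               /*       BNE  loop  ; Z set by MOVB  */
  }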

There's no obvious relationship at all between (say) Python and the gate-level internals of any mainstream modern CPU. There are at least four (five? six?) levels of abstractions between them.

It's useful to build a generic gate-level processor to have some notional idea of what a computer is. And of course it's essential if you go into processor design. But µArch details are a graduate-level topic and not something most developers need to worry about.

mulmen 1167 days ago [-]
The suggestion is not to build the computer. It is to learn how to build a computer.

The software manufacturing industry could do with some more silicon sympathy.

See: Wirth's Law.

https://en.wikipedia.org/wiki/Wirth's_law

qayxc 1167 days ago [-]
> The software manufacturing industry could do with some more silicon sympathy.

These are still orthogonal concepts. First of all, it's all about cost: if the operational costs of running a piece of software over the lifetime of said software are significantly lower than the cost of optimisation, there's nothing to be gained from doing the latter.

Developer time is more expensive than a faster CPU, additional RAM or slightly higher power usage. Back in the day, the iron was expensive and developer time was cheap. Today it's the other way around, so saving on development time is more important than optimising for specific hardware.

Hardware-level optimisation becomes especially bogus if your deployment target is data centres - you often don't even know the deployment target beforehand (could be an ARM CPU, could be an x86 CPU). With many modern SoCs (see the M1), it's even impossible to optimise at that level, since you don't even get the required hardware specs and simply using Apple's libraries will get the best results.

So my argument still stands: there's little benefit in general from going into such detail. Feel free to disagree, but developments like the Go language, SoCs, and GPGPU compute seem to point in another direction.

> It is to learn how to build a computer.

And how does that translate to actual hardware then? Very poorly still. The fact is that unless you work with certain embedded systems, you never get close enough to the hardware to benefit from this.

This time is better spent learning how to use your tools, and learning how to benchmark and optimise based on algorithms and data structures, because that's where the inefficiencies start.
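A minimal sketch of what I mean by measuring at that level (plain C; clock() is coarse but more than enough to show the gap between a linear scan and a sorted lookup; the sizes are arbitrary):

  /* Measure linear search vs. binary search over the same data. */
  #include <stdio.h>
  #include <stdlib.h>
  #include <time.h>

  #define N       20000
  #define QUERIES 20000

  static int cmp_int(const void *a, const void *b)
  {
      int x = *(const int *)a, y = *(const int *)b;
      return (x > y) - (x < y);
  }

  static int linear_find(const int *v, int n, int key)
  {
      for (int i = 0; i < n; i++)
          if (v[i] == key) return 1;
      return 0;
  }

  int main(void)
  {
      static int v[N];
      for (int i = 0; i < N; i++) v[i] = rand();

      clock_t t0 = clock();
      long hits = 0;
      for (int q = 0; q < QUERIES; q++)
          hits += linear_find(v, N, rand());
      clock_t t1 = clock();

      qsort(v, N, sizeof v[0], cmp_int);
      for (int q = 0; q < QUERIES; q++)
          hits += bsearch(&(int){rand()}, v, N, sizeof v[0], cmp_int) != NULL;
      clock_t t2 = clock();

      printf("linear: %.3fs  bsearch: %.3fs  (hits=%ld)\n",
             (double)(t1 - t0) / CLOCKS_PER_SEC,
             (double)(t2 - t1) / CLOCKS_PER_SEC, hits);
      return 0;
  }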

Besides, most programming languages are so far removed from the hardware that there's absolutely nothing you can do to optimise at that level (JS, Python, Go, Java, C#, Haskell, LISP, etc.).

trasz 1167 days ago [-]
Developer time is expensive, sure. But poorly performing software, while perhaps sparing the original developer’s time, wastes time for users and other developers.
qayxc 1166 days ago [-]
I agree 100%. I just don't share the sentiment that poorly written software is caused by a lack of detailed low-level understanding of the hardware.
oivey 1166 days ago [-]
It’s pretty useful to know some low level computer engineering if you’re writing high performance software. Having some understanding as to how memory, cache, and threads work for your target platform is pretty important in a lot of applications. That sort of knowledge gives you a big head start when learning to program GPUs, too.
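A simple example of what that knowledge buys you (plain row-major C arrays; the exact ratio depends on the machine, but the stride-1 loop is typically several times faster because it uses every byte of each cache line it fetches):

  /* Same arithmetic, very different cache behaviour. */
  #include <stdio.h>
  #include <time.h>

  #define N 2048
  static double m[N][N];

  int main(void)
  {
      double sum = 0.0;
      clock_t t0 = clock();
      for (int i = 0; i < N; i++)        /* row-major: stride-1, cache friendly      */
          for (int j = 0; j < N; j++)
              sum += m[i][j];
      clock_t t1 = clock();
      for (int j = 0; j < N; j++)        /* column-major: large stride, poor locality */
          for (int i = 0; i < N; i++)
              sum += m[i][j];
      clock_t t2 = clock();
      printf("rows: %.3fs  cols: %.3fs  (%g)\n",
             (double)(t1 - t0) / CLOCKS_PER_SEC,
             (double)(t2 - t1) / CLOCKS_PER_SEC, sum);
      return 0;
  }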
williamdclt 1166 days ago [-]
The large majority of us (easily above 90%) do not write high-performance software or program GPUs. I'm not saying it's useless to understand what's happening at a low level, but pretending everyone should be required to have this knowledge is like asking all SEs to be able to do deep learning; it's just not relevant for most people.
oivey 1166 days ago [-]
Low level programming isn’t like some disused area of mathematics that was popular in the early 20th century; it underpins everything we do. Your OS, your network stack, your graphical desktop, your browser, probably your favorite language’s interpreter, all depend on writing high performance code. At any point you’re probably a single step away from some C.

Languages like Python made it much easier to program but didn’t put as much thought into performance. I think the future of computing will be in languages that manage to be pleasant to write like Python while still exploiting low level hardware features. Rust, which is like C++ that doesn’t suck, and Julia, which is like Python with cleaned up semantics to allow easier compilation, come to mind. The economics of a few people developing high performance languages that everyone can use just makes too much sense. More cloud computing may be cheaper than more devs, but it still isn’t cheap in the absolute sense.

williamdclt 1166 days ago [-]
I agree with your comment; I don't think we're saying different things. In particular, cherry-picking something you said:

> The economics of a few people developing high performance languages that everyone can use just makes too much sense

I agree: high performance is the job of a _few_ people. It doesn't hurt for everybody else to know the basics, but even then... that's just one area out of many a dev could dive into. Maybe security or virtualisation or OS programming or whatever would be just as useful or more. My point is that we can't pick a random subfield of software engineering and say "everybody should be skilled at that".

oivey 1166 days ago [-]
We’re probably not that far apart. I do think in the future that more people will be concerned at least with basic low level optimizations because some “next gen” languages make it so much easier. Part of what makes it hard now is that C lacks many of the very useful abstractions and features that have matured over the last few decades.
segmondy 1167 days ago [-]
I agree with your opinion, but even suggesting that folks learn C or assembly language today can often get you labeled an old-timer/gatekeeper.
userbinator 1167 days ago [-]
"gate keeper"? More like "gate opener"... I have a hypothesis that some of the tech industry has vested interests in not letting people learn too much of the details, because then they would be more informed and not easily lead by the marketing lies and half-truths on which much of the industry is based. Knowledge is power.
quantified 1167 days ago [-]
They can, I agree. They can also take you into real guru territory; I think that's the angle to work.
the_only_law 1167 days ago [-]
Interestingly, this is partially how I realized I wasn't actually that great at programming. I have just never been very good with digital logic and have failed to really grasp it multiple times. Starting 6 or 7 years ago I attempted to design a 6502 computer and could not, for the life of me, figure out how to design the address decoding the way I wanted; more recently I've been trying to get into FPGAs. I'm not sure what it is: at a very broken-down, individual level everything makes sense, but as soon as it comes to composing the pieces into anything (e.g. an address decoder, or even just simple digital logic devices) I get lost and have to sit down and drill the details out on paper to remotely get it.
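For context, the decode I was after boils down to a few boolean functions of the top address lines. Modelled in C against a hypothetical memory map (roughly the layout common in 6502 homebrew designs, not any specific board) it looks trivial, which is exactly the frustrating part:

  /* Combinational address decode for a hypothetical 6502 memory map:
   *   $0000-$3FFF RAM   (A15=0, A14=0)
   *   $6000-$7FFF I/O   (A15=0, A14=1, A13=1)
   *   $8000-$FFFF ROM   (A15=1)
   * Each chip-select is just a boolean function of the address bits,
   * which is exactly what the decode gates implement in hardware. */
  #include <stdint.h>
  #include <stdio.h>

  struct selects { int ram_cs, io_cs, rom_cs; };

  static struct selects decode(uint16_t addr)
  {
      int a15 = (addr >> 15) & 1, a14 = (addr >> 14) & 1, a13 = (addr >> 13) & 1;
      struct selects s;
      s.rom_cs = a15;                  /* A15 high selects ROM             */
      s.ram_cs = !a15 && !a14;         /* bottom 16K: A15 and A14 both low */
      s.io_cs  = !a15 && a14 && a13;   /* the $6000-$7FFF window           */
      return s;
  }

  int main(void)
  {
      uint16_t probes[] = { 0x0200, 0x6000, 0xFFFC };
      for (unsigned i = 0; i < sizeof probes / sizeof probes[0]; i++) {
          struct selects s = decode(probes[i]);
          printf("$%04X  RAM=%d IO=%d ROM=%d\n",
                 (unsigned)probes[i], s.ram_cs, s.io_cs, s.rom_cs);
      }
      return 0;
  }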
mwcampbell 1167 days ago [-]
> I wasn't actually that great at programming.

Do you (or did you) write software that people find useful? That's what matters most.

analog31 1167 days ago [-]
A possible middle ground is to do something with a primitive microcontroller, where the architecture isn't terribly complicated but at least it requires learning about things like binary numbers and how numbers actually get passed from one piece of hardware to another. While they are much maligned, the 8-bit PIC chips are still available, and are extremely bare bones. I think there's a good reason why they were the favorite of hobbyists for a long time, before Arduino came along.
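For example, clocking a byte out to a shift register one bit at a time makes the "how numbers actually get passed from one piece of hardware to another" part very concrete. A sketch in C (the pin functions here are stand-ins; on a real PIC they would be writes to a port register):

  /* Bit-bang one byte out, MSB first, to a shift register.
   * set_pin()/pulse_clock() stand in for real port-register writes. */
  #include <stdint.h>
  #include <stdio.h>

  static void set_pin(const char *name, int level) { printf("%s=%d ", name, level); }
  static void pulse_clock(void)                    { printf("CLK\n"); }

  static void shift_out(uint8_t value)
  {
      for (int bit = 7; bit >= 0; bit--) {
          set_pin("DATA", (value >> bit) & 1);  /* present one bit on the data line */
          pulse_clock();                        /* clock it into the shift register */
      }
  }

  int main(void)
  {
      shift_out(0xA5);   /* 10100101 goes out one bit per clock edge */
      return 0;
  }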

My college physics curriculum included building a 4 bit minicomputer from TTL (in the early 80s). While we were given the design, we still had to understand it in order to make it work, because nobody's machine ever came up on the first try.

dfox 1167 days ago [-]
I second the suggestion that every software engineer should at least be able to imagine how they would go about building a computer from discrete logic.

When commercial computers were built from discrete logic in this manner, there was a distinction between "horizontal" and "vertical" microcode. Horizontal is what most people would today call microcode, i.e. implementing most of the control logic as one somewhat wide memory array, with the datapath mostly directly controlled by bits coming from that memory. The vertical approach essentially involved designing a simplistic RISC-like CPU with fixed-width instructions which then ran an interpreter for the actual instruction stream of the machine (i.e. what the author's CPU does). And obviously there are various middle-ground approaches. The reason for this is that control stores for purely horizontal microcode are not that dense and involve memory with weird and large word lengths (which is somewhat inconvenient to implement from commercially available (P)ROM chips).
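A rough sketch of the difference (the field layout below is invented for illustration; real control stores were much wider and machine-specific): a horizontal microword is a bundle of raw control bits that drive the datapath directly, while the vertical approach encodes a narrow micro-instruction that a small sequencer has to decode again.

  /* Illustration of horizontal vs. vertical microcode word formats.
   * Field widths are invented; real control stores were much wider. */
  #include <stdint.h>

  /* Horizontal: every control line gets its own bit in a wide word. */
  struct horizontal_uword {
      unsigned alu_op    : 3;   /* drives the ALU function select lines  */
      unsigned reg_src   : 3;   /* register file read address            */
      unsigned reg_dst   : 3;   /* register file write address           */
      unsigned reg_write : 1;   /* write-enable strobe                   */
      unsigned mem_read  : 1;
      unsigned mem_write : 1;
      unsigned next_addr : 8;   /* where in the control store to go next */
  };

  /* Vertical: a narrow, encoded micro-instruction that a small,
   * RISC-like sequencer decodes again; denser, but slower per step. */
  typedef uint16_t vertical_uinsn;        /* e.g. 4-bit uop + 12-bit operand */
  #define V_UOP(insn)     ((insn) >> 12)
  #define V_OPERAND(insn) ((insn) & 0x0FFF)

  int main(void)
  {
      struct horizontal_uword h = { .alu_op = 2, .reg_write = 1, .next_addr = 7 };
      vertical_uinsn v = (3u << 12) | 0x042;
      return (int)(h.alu_op + V_UOP(v));  /* keep the compiler from discarding them */
  }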

Philip-J-Fry 1167 days ago [-]
I really want to get closer to the metal, like programming an FPGA or properly learning assembly (I know that's like 2 completely different skill sets). But I never really know where to start.

Is there any like definitive book or website to learn any of this stuff?

IHLayman 1167 days ago [-]
There is an interesting game I played in the past week called MHRD that has you start with NAND gates and gets you all the way to a CPU, challenging you to use as few NAND gates as you can at each step. That was pretty fun and reinforced some of the ideas (although it elides a lot of the JK flip-flop and RS latch stuff by abstracting it into its own data unit). I agree with other commenters talking about eater.net and NANDtoTetris as they are both great instructional resources (Ben Eater’s YouTube series is extraordinary).

As far as learning assembly, you may want to try the Zachtronics games TIS-100, Shenzhen.IO, and Exapunks, as they offer a bit of the instinct for assembly coding. To learn 6502, I could suggest finding yourself a good C-128 emulator. The C-128 is how I learned both BASIC and assembly (using its MONITOR command), and the 6502 is pretty easy to understand even if underpowered.

uncledave 1167 days ago [-]
qayxc 1167 days ago [-]
IMHO a great way to get into this is programming old game consoles.

The NES or Gameboy are great little machines that are easy to understand and have tons of guides, tutorials, and tools available.

With very little investment, you can even run homebrew software on actual hardware.

krallja 1167 days ago [-]
eater.net/6502
msla 1166 days ago [-]
I'd love a tutorial which takes a non-expert through the task of building a CPU with out-of-order execution and other modern features.

The space for building 6502-era CPUs is saturated; let's move on to the PowerPC or Pentium Pro, please.

_nalply 1166 days ago [-]
By the way, the 6502 (yay Commodore 64!) has something like a microcode: a PLA. This describes it a lot better than I ever could.

https://news.ycombinator.com/item?id=5353198

Visual 6502 made me understand my Commodore 64 at an unexpectedly deep level. I experienced epiphany and bliss: I can begin to understand how computing works, from GraphQL down to electrons.

http://www.visual6502.org/JSSim/index.html

harperlee 1167 days ago [-]
You mean build or design?

If it’s the former, I generally agree. It does not take long - I worked through nandgame.com over a slow afternoon and it helped me refresh things that I remembered fuzzily.

If it’s the latter, I think it is too high an ask nowadays where so many people work at a high abstraction level.

AussieWog93 1167 days ago [-]
Hell, reading the Intel programming manual would be a good start. How many people writing x86 code can't name a single x86 instruction?
tyingq 1167 days ago [-]
The homebrew cpu "web ring" page referenced at the bottom of the article is a nice catalog of other (mostly) TTL only CPUs: https://www.homebrewcpuring.org/ringhome.html

Note: The site's a little underpowered, so posting a link here may slow it down :)

Jkvngt 1167 days ago [-]
Forth is an amazing language, too bad we're so far from the hardware most of the time these days.
the_only_law 1167 days ago [-]
I have an old network protocol analyzer from an HP-Agilent subsidiary that's supposedly scriptable in Forth. Actually, out of all my old protocol analyzers this one is probably the coolest from a technical perspective. It has six 68k's in it, and is heavy as hell. Unfortunately, unlike all my other ones, I never get to use it because it only has ISDN BRI interfaces, and most of my hacking involves PRI (T1/E1) or V-series interfaces.
reaperducer 1167 days ago [-]
I've seen a couple of old scientific calculators/pocket computers on auction sites lately that can be programmed in Forth. Very tempting to try.

I've also seen ones that can be programmed in C, BASIC, and even Pascal. Again, magnets for my disposable income.

trasz 1167 days ago [-]
Are you sure it’s literally Forth? If so, can you share some keywords?

The closest thing I know of is the HP 48 - not Forth, but quite Forth-like, and it definitely taught me a thing or two.

opencl 1166 days ago [-]
Some of the old HPs (41, 71B, 75C) had Forth interpreter ROMs available as add-on modules.
deepspace 1167 days ago [-]
So, the author implies in the article that he is not very experienced with logic design, and it certainly shows, but IMO the most egregious error is the use of EPROM.

I was involved in embedded design in the late 80s/early 90s and as soon as Flash memory became available, we switched all our product lines over within a year, even though the cost was still 10x higher. UV erasable EPROM was THAT much of a pain to deal with.

It blows my mind that the author was still using EPROM after 2000.

jfk13 1167 days ago [-]
Very cool! Might be worth adding a note of the date of the page. (Original copyright is 2003; updated 2006.)
loudouncodes 1167 days ago [-]
Wow! A webring! Haven’t seen one of those in a zillion years.
jacquesm 1167 days ago [-]
A HN member is trying to revive them.
throwawaybutwhy 1167 days ago [-]
(2003) or (2006). First recorded on Wayback back in 2014: https://web.archive.org/web/20140124074811/http://www.aholme...

HN's fascination with Lisp and Forth is perfectly understandable. Combined with DIY hardware, this sounds like a sweet spot for intellectual... cough... stimulation.

mulmen 1167 days ago [-]
This is really cool. I love the physical construction as well. Is there more detail about building the actual machine?
peter_d_sherman 1167 days ago [-]
>"This computer has no microprocessor. The CPU is discrete TTL logic."
detaro 1167 days ago [-]
the "no CPU" in your title is kinda wrong that way: it does have a CPU, it's just not a single integrated circuit or microprocessor. Maybe "TTL only" or "TTL chips only", without the "no CPU" part is more accurate, if "no microprocessor" doesn't fit?
peter_d_sherman 1167 days ago [-]
I have changed the ending of the title to (TTL only, no microprocessor)...