Google offers free fabbing for 130nm open-source chips (fossi-foundation.org)
Taek 1378 days ago [-]
I've spent some time in the chip industry. It is awful, backwards, and super far behind. I didn't appreciate the full power of open source until I saw an industry that operates without it.

Want a linter for your project? That's going to be $50k. Also, it's an absolutely terrible linter by software standards. In software, linters combine the best ideas from thousands of engineers across dozens of companies building on each other's ideas over multiple decades. In hardware, linters combine the best ideas of a single team, because everything is closed and proprietary and your own special 'secret sauce'.

In software, I can import things for free like nginx and mysql, and we have insanely complex compilers like llvm that are completely free. In hardware, the equivalent libraries are both 1-2 orders of magnitude less sophisticated (a consequence of everyone absolutely refusing to share knowledge or let others build on their ideas for free), and are also going to cost you 6+ figures for anything remotely involved.

Hardware is in the stone age of sophistication, and it entirely boils down to the fact that people don't work together to make bigger, more sophisticated projects. Genuinely would not surprise me if a strong open source community could push the capabilities of a 130nm stack beyond what many 7nm projects are capable of, simply because of the knowledge gap that would start to develop between the open and closed world.

m12k 1378 days ago [-]
I've been thinking about this a lot lately. In economics, the value of competition is well understood and widely lauded, but the power of cooperation seems to be valued much less - cooperation simply doesn't seem as fashionable. But the FOSS world gives me hope - it shows me a world where cooperation is encouraged, and works really, really well. Where the best available solution isn't just the one that was made by a single team in a successful company that managed to beat everyone else (and which may or may not have just gotten into a dominant position via e.g. bigger marketing spend).

It's a true meritocracy, and the best ideas and tools don't just succeed and beat out everything else, they are also copied, so their innovation makes their competitors better too - and unlike the business world, this is seen as a plus. The best solutions end up combining the innovation and brilliance of a much larger group of people than any one team in the cutthroat world of traditional business.

Just think about how much effort is wasted around the world every day by hundreds of thousands of companies reinventing the wheel because the thousands of other existing solutions to that exact problem were also created behind closed doors. Think about how much of this pointless duplication FOSS has already saved us from! I really hope the value of cooperation and the example set by FOSS can spread to more parts of society.
pradn 1378 days ago [-]
Classical economics thinks of people as "rational utility-maximizing actors", which doesn't approximate reality in quite a lot of ways. There's been a move toward more sophisticated models - like that people minimize regret more than they seek the optimal reward ("rational actors with regret minimization.") This switch to more complex underlying models parallels the evolution of computational models in computer science. Algorithms used to be designed only for the RAM computation model, which doesn't model real-life CPUs, which have caches and where not all operations take unit time. Now there's a wide variety of models to choose from, including cache-aware, parallel, and quantum models. You often get better predictors of the real world this way.

There has been quite a lot of study in economics about cooperation. My favorite is Elinor Ostrom's work on "the commons". She observes that with a certain set of rules, discovered across the world and across varying geographies, people do seem to be able to cooperate to maintain a natural resource like a fishery or a forest or irrigation canals for hundreds of years. Her rules are here (https://en.wikipedia.org/wiki/Elinor_Ostrom#Design_principle...).
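To make the contrast between the two decision rules concrete, here is a tiny decision-theory sketch in Python; the payoff numbers are invented purely for illustration, and the point is that an expected-utility maximizer and a minimax-regret chooser can disagree on the same payoff table:

  # Three actions, three equally likely states; payoffs are made up.
  PAYOFFS = {
      "safe_bond":  {"bust": 2,   "flat": 2,  "boom": 2},
      "index_fund": {"bust": -1,  "flat": 3,  "boom": 6},
      "moonshot":   {"bust": -30, "flat": 10, "boom": 35},
  }
  STATES = ["bust", "flat", "boom"]

  def expected_value(action):
      # Classical rational actor: maximize expected payoff (uniform prior).
      return sum(PAYOFFS[action][s] for s in STATES) / len(STATES)

  def max_regret(action):
      # Regret in a state = best available payoff there minus this action's payoff.
      return max(
          max(PAYOFFS[a][s] for a in PAYOFFS) - PAYOFFS[action][s]
          for s in STATES
      )

  print(max(PAYOFFS, key=expected_value))  # "moonshot" (EV 5.0)
  print(min(PAYOFFS, key=max_regret))      # "index_fund" (worst-case regret 29)

The regret minimizer passes over the highest-expected-value gamble because its worst case, relative to what was achievable, stings the most.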

pas 1377 days ago [-]
The problem with the "rational agent" model is that it's a tautology. Yes, sure, everyone wants more utility, great, but everyone's utility function is slightly different. As you say, some are risk takers pursuing insane rewards/yields/profits with low probability, while others are super risk-averse and conservative in their choices, etc.

That said, economics doesn't rest on this. Macroeconomics doesn't care, and neither does micro/labor/health/sports/developmental econ.

Sure, trying to predict how someone (and, more interestingly, how groups) will behave based on their psych profile is an important area of research, but the aforementioned subfields of econ already have workable assumptions about how people will behave in the aggregate, even if they can't derive them from some exact utility function.

mlindner 1377 days ago [-]
> The problem with the "rational agent" model is that it's a tautology. Yes, sure, everyone wants more utility, great, but everyone's utility function is slightly different. As you say, some are risk takers pursuing insane rewards/yields/profits with low probability, while others are super risk-averse and conservative in their choices, etc.

There's nothing in the rational agent model that assumes everyone has the same utility function. Different people want different things.

pas 1376 days ago [-]
Yes, of course, but that's what I mean by the problem. Saying people are (boundedly rational) utility-function maximizers doesn't give us a predictive theory; it just says, in a fancy way, that people do what they do for "reasons", and everyone usually has a different set of reasons.

Of course, it's fine as a very, very general fundamental theory, if you then want to study how people's revealed and non-revealed (older names for implicit and explicit) preferences aggregate into a utility function. (There's a whole bunch of math about pairwise comparison matrices. Lately there's some movement in that space about using perturbation to model inconsistencies in preferences, etc.)
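For readers who haven't met the term: a pairwise comparison matrix (here in Saaty's AHP formulation, one standard version among several) records how strongly each alternative is preferred to each other, and "inconsistency" then has a precise meaning:

  A = (a_{ij}), \quad a_{ij} > 0, \quad a_{ji} = 1/a_{ij},
  \qquad
  A \text{ consistent} \iff a_{ik} = a_{ij}\,a_{jk} \ \forall i,j,k
  \iff a_{ij} = w_i / w_j \text{ for some weight vector } w.

Real elicited preferences routinely violate the consistency condition (preferring A to B and B to C, but with mismatched strengths, or even C to A); that deviation is exactly what the perturbation work mentioned above tries to model.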

pradn 1376 days ago [-]
Yes, if people are rational actors, whatever they do is rational according to their private utility function. So, if anything they do is rational, then in what sense is it "rational", which supposedly depends on some objective metric?
pas 1376 days ago [-]
It's rational because objectively the agent does what is best for it. Basically that's the definition of being rational. But without any qualifications, constraints, or additional context this description of decision theory is not really useful. (That was the point I tried to convey in my original comment.)

What the general theory is useful for is quickly reducing the very general behavior/decision problem to "show me your utility function and I'll tell you what you will do". Of course, that then becomes the problem: how to model utility, and how to formalize expected utility over all possible actions. What about priors (as in path-dependence: do we want to encode that into the utility function - and constantly, dynamically update the function, like a Bayesian agent would update its beliefs - or somehow manage it separately)?

...

Basically I was trying to persuade people not to say things like "economists are dumb, because people are not rational utility maximizers", because that's a false implication. Of course they are, but - just like you said - this doesn't help us better predict people's (agents') behavior; it just gives us a new task: modeling their utility functions. And that's where regret comes in. (Which is basically risk-aversion, which is what behavioral economics studies. People are not symmetric about positive and negative expectations: they value avoiding negative-utility events more than they value encountering positive-utility events. With this insight we can build better economic models - for example, we can use it to "explain" why wages are more sticky - especially downward - than we would expect; with some handwaving we can also explain why in times of crisis people are let go instead of having their hourly compensation decreased, why some people are so gullible [peer pressure, confirmation bias ~ cognitive dissonance], and so on.)
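The asymmetry described above has a standard formalization in prospect theory; one common form of the Kahneman-Tversky value function, with their 1992 parameter estimates, is:

  v(x) =
  \begin{cases}
    x^{\alpha}               & x \ge 0 \text{ (gains)} \\
    -\lambda \, (-x)^{\beta} & x < 0 \text{ (losses)}
  \end{cases}
  \qquad \alpha \approx \beta \approx 0.88, \quad \lambda \approx 2.25

With loss aversion lambda of roughly 2.25, a loss is weighted about twice as heavily as an equal-sized gain, which is one route to the sticky-downward-wages story above.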

webmaven 1377 days ago [-]
> Classical economics thinks of people as "rational utility-maximizing actors", which doesn't approximate reality in quite a lot of ways. There's been a move toward more sophisticated models - like that people minimize regret more than they seek the optimal reward ("rational actors with regret minimization")

Even when entities (corporations, for example) are trying to maximize utility, and an optimal decision is desired, there are issues with how much time and resources can be spent making decisions, so optimality has to be bounded in various ways (do you wait for more information? Do you spend more time and compute on calculating what would be optimal? etc.).

agumonkey 1377 days ago [-]
> "rational actors with regret minimization."

I keep seeing something like this in workplaces: people don't join forces, they avoid friction. Until something (a crisis) or someone (a capable leader) flips the thing on its head.

callalex 1376 days ago [-]
Follow the incentives. Any individual employee is massively incentivized to keep their job. There is a slight chance that a certain leader may be incentivized to bring success to the company. That is the exception, not the norm. I learned this the hard way by caring about companies that didn’t care about me.
agumonkey 1376 days ago [-]
'Keeping your job' implying submission to social pressure to be accepted, and nothing more? I'm not sure I got your whole argument, so if you have more details, I'd be happy to read :)
CabSauce 1378 days ago [-]

  In economics, ... but the power of cooperation seems to be valued much less
I'm not sure that I agree with this. The creation of firms and trade are both cooperative. They aren't altruistic though. (I'm not disagreeing with your overall point, just that cooperation isn't valued in economics.)
koheripbal 1378 days ago [-]
Agreed - partnerships abound in the real business world. It might be under-modeled in economic theory and not well taught in business school, but networking and having high-level industry relationships are the lifeblood of a good business leader - specifically because of partnering and information sharing.
38e7ruhrhdh 1377 days ago [-]
If anything economics seems like the smarter field when compared to the open source community because the focus is on resource evaluation, allocation, and returns. Open source could probably use more talent but the missing lynchpin is obviously a reward structure. The techno-optimism of the 80s and 90s doesn't scale, we need licensing structures that entitle the parties doing the unprofitable stuff to a share of the profits their work produces. Resisting commercialization has weakened the community not made it stronger, open source can still be free without being free labor.
zhengyi13 1377 days ago [-]
If you think there's no reward structure, perhaps it's because you can't see it?

https://en.wikipedia.org/wiki/Gift_economy

Spooky23 1377 days ago [-]
It’s the lifeblood of the leader, but not the firm.

A good percentage of being a good leader is being able to pick up the phone and get stuff done, which is often contrary to what the firm wants.

dclowd9901 1377 days ago [-]
It’s not cooperation at the human level, and that makes all the difference. “Altruism” is a convenient dismissal of a principle (survival through cooperation) that has actually allowed humanity to survive and thrive thus far.

The trick to making it all work is that knowledge and/or tooling (the means of production) ought not be proprietary, but product or service absolutely can be.

dimva 1378 days ago [-]
FOSS is super competitive, too. Teams splinter off and fork projects, new projects start up that try to dethrone the industry leaders, people compete for status/usage, etc. The main difference is that the work product/process is public and so ideas spread much more rapidly.
JDEW 1377 days ago [-]
Yes but competing in a cooperation is different than cooperating in a competition.
naringas 1378 days ago [-]
But let's not forget that the non-material nature of software allows this; hardware will always be a physical (material) artifact.

However, the line does blur when talking about blueprints and designs.

In any case, I think that free software movements are a sociological anomaly. I wonder if there is any academic research into this from an anthropological or historical-economics viewpoint.

Also, it seems to me that in some sense the entire market works in cooperation, just not very efficiently (it optimizes for other things than efficiency and is heavily distorted by subsidies and tariffs).

nwallin 1378 days ago [-]
> But let's not forget that the non-material nature of software allows this; hardware will always be a physical (material) artifact.

Sort of?

I tried to get into FPGA programming a while ago, and it turns out the entire software stack to get from an idea in my brain to a blinking LED on a dev board is hot garbage. First of all, it's insanely expensive, and second of all, it really, really sucks. Like how is it <current year> (I forgot what year it was, but it was 2016-2018 timeframe) and you've tried to reinvent the IDE and failed?

I think projects like RISC-V and J Core are super cool, but I couldn't possibly even attempt to contribute to them based on how awful the process is.

HansHamster 1377 days ago [-]
Check out IceStorm (+Yosys, nextpnr...) for a pretty complete open source toolchain for ice40 FPGAs. It's really amazing to go from the infuriating-to-use vendor tools, with all their quirks and bugs, to a single 'make' that just generates the bitfile without launching the broken IDE or complaining about licenses...

The FPGAs are also relatively inexpensive (<5-10€ in single quantities depending on the model) but are on the lower end in terms of features and performance.

The tools should also support the more powerful ECP5 FPGAs, but I haven't tried them yet.
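For a sense of what that flow looks like in practice, here is a minimal blinky sketch using the Amaranth HDL (formerly nMigen), a Python front end that drives Yosys/nextpnr/IceStorm under the hood. This is an illustration, not from the comment above: it assumes the amaranth and amaranth-boards packages, and exact API details vary by version.

  # Minimal iCE40 "blinky" via the open toolchain (Yosys + nextpnr + IceStorm).
  # Sketch only: resource names and the .o attribute depend on the Amaranth version.
  from amaranth import Elaboratable, Module, Signal
  from amaranth_boards.icestick import ICEStickPlatform

  class Blinky(Elaboratable):
      def elaborate(self, platform):
          m = Module()
          led = platform.request("led", 0)   # board-defined LED resource
          counter = Signal(24)               # 12 MHz / 2^24 ~= 0.7 Hz toggle
          m.d.sync += counter.eq(counter + 1)
          m.d.comb += led.o.eq(counter[-1])  # drive the LED from the MSB
          return m

  if __name__ == "__main__":
      # Synthesizes, places, routes, packs the bitstream, and flashes the board.
      ICEStickPlatform().build(Blinky(), do_program=True)

The whole synthesis-to-bitstream pipeline runs from one script, which is exactly the single-'make' experience described above.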

jacquesm 1377 days ago [-]
Interesting, thank you for the pointer. I've been playing around with FPGAs a bit and this sounds like it might just be what I was looking for. Is there any starter board that you'd recommend for this workflow?
HansHamster 1376 days ago [-]
I've only used the bare chips on custom boards, so I have no experience with the eval boards.

A popular board a few years ago was the iCEstick for about 20€, but I can't find it anywhere for that price now. The ICE40 breakout boards from Lattice are also more expensive now for whatever reason. Olimex has a board with the HX8K that is more reasonably priced, but unfortunately doesn't have an onboard programmer...

For the ECP5, the ULX3S board looks interesting. It's not exactly cheap, but also has more features than just buttons, LEDs, and pin headers like on the other boards.

aswanson 1378 days ago [-]
Same. I started off in love with FPGA design in college. Software design is light-years ahead of that area in terms of tool maturity, functionality, and freedom.
lopsidedBrain 1377 days ago [-]
The material nature of hardware adds barely any more cost than that of software. Yes, you cannot just copy bits around, but you can take the same design and fabricate millions of chips for pennies. The cost of producing any given silicon chip has almost nothing to do with its material cost; it has everything to do with the amount of design work that went into it, just like software. None of that appreciably changes the benefits of cooperation vs competition.
jacquesm 1377 days ago [-]
There is no historical precedent other than the media industry, which uses the copyright system for an entirely different purpose: to milk as much as possible out of a single investment of time and effort, usually by someone other than the original creator.

The whole thing revolves around the marginal cost of an extra copy of a piece of software being close to zero, no other critical industry gets such economies of scale. Making more chips still requires huge capex and opex.

m12k 1378 days ago [-]
I'm curious to hear what you mean when you say the entire market works in cooperation? I mean, strategic partnerships happen, and companies work as suppliers for other companies. But that's not the market - the market is where someone wanting to buy something goes and evaluates competing products and picks the one they want to buy. It's pretty comparable to natural selection, where the fittest animals survive and the least fit species go extinct, just as the fittest companies gain market share while the least fit companies go bankrupt. So I guess you could say that the market functions as an ecosystem - maybe the word you were looking for was 'symbiosis' rather than cooperation? Cheetahs aren't cooperating with lions, they are competing - but relative to the rest of the ecosystem, they exist in a form of symbiosis.
crdrost 1378 days ago [-]
There are various forms of cooperation too. Lions merge with cheetahs to hopefully starve the leopards out, then they domesticate emus and antelopes so that they can survive scorching the rest of the savannah so they don't have to deal with those pesky wild dogs. Then they see tigers doing the same in India and say “hey let's agree that you can run rampant through Africa if we can run rampant through India, but we agree on these shared limits so that we are not in conflict.”

A favorite example is that US legislation to ban advertisements for smoking was sponsored by the tobacco industry. They were spending a lot on ads just to keep up with the Joneses; if Camel voluntarily stopped and Marlboro continued, then Camel would go the way of Lucky Strike. They would rather agree to cut their expenditures! But they needed to make sure no other young tobacco whippersnappers came in and started showing a couple of ads which they would both have to beat, reigniting the war.

Open source is interesting because it seems to be a marvelous unexpected outcome of the existence of the corporation. Individual people go to work at corporations and are aware that whatever they produce at that corporation is mortal: it will die with that corporation if it decides to stop maintaining it, or if the corporation itself folds. The individual wants his or her labor to survive longer, to become immortal: "This company could go out of business and I will still have these tools at my next job." So in some sense layered self-interests create a push towards corporate cooperation.

CabSauce 1378 days ago [-]
Trade and money are just tools to facilitate cooperation. They incentivize agents to cooperate by sharing the value of that cooperation.
visarga 1377 days ago [-]
'Survival of the fittest' when applied to groups implies cooperation as an essential skill between individuals. It's not all competition. The fittest group is not the one made of the most individually fit members, they also need to function well together.
jimmySixDOF 1377 days ago [-]
"selfish individuals beat altruistic individuals. Altruistic groups beat selfish groups. Everything else is commentary."

-David Sloan Wilson & E. O. Wilson

mannykoum 1378 days ago [-]
To me, it seems less like a sociological anomaly and more like an example of the quality of production possible outside the established competitive norms of capitalism. There are multiple such examples throughout history. The gift economy wasn't born with FOSS development.

There is a lot of literature on the subject of cooperation, especially from anarchist philosophers (e.g. Mutual Aid: A Factor of Evolution, Kropotkin).

kilburn 1377 days ago [-]
> In any case, I think that free software movements are a sociological anomaly. I wonder if there is any academic research into this from an anthropological or historical-economics viewpoint.

I don't have the proper background to make a strong case about this, but I feel like Middle Ages guilds would be closer to the open-source model than to the current "trade secrets" one, wouldn't they?

Likewise, I don't see farmers of ye olde times keeping their crop-growing tricks to themselves as secrets and so on.

Furthermore, we know of many indigenous cultures where "the tribe" is regarded as more important than the individual, meaning that sociologically they should be more aligned with the open-source model than the capitalistic one, shouldn't they?

Again, I'm not an expert in the area, but it seems to me that our current society is more "historically anomalous" at the commoner level than any more socially-conscious one would be (i.e., common people lean more toward greed/individuality these days than they have almost any time in the past).

_zamorano_ 1378 days ago [-]
Well, I've yet to see a free (as in beer) lawyer helping people for free after his regular workday.
mr_toad 1377 days ago [-]
Software can be duplicated for near zero marginal cost. It’s a well studied phenomenon in economics.

https://en.wikipedia.org/wiki/Public_good_(economics)

novaRom 1377 days ago [-]
And corporate consumers can take advantage of public goods without contributing sufficiently to their creation.
lucbocahut 1378 days ago [-]
Great point. It seems to me the mechanics of competition are at play to some extent in open source: ideas compete against each other, the better ones prevail, and contribute to a better whole.
mercer 1378 days ago [-]
Whatever one might think of socialism, and I really don't mean to start a political discussion here, FOSS is an example for me that shows that at the very least we're not purely driven by competition.
shard 1377 days ago [-]
Haven't thought very deeply about this, but FOSS doesn't seem like socialism, as the capital being competed for is user attention. FOSS projects without enough capital do not gain enough developers, and fall into disrepair and obscurity. Not sure what a socialist FOSS movement might look like, but maybe developers would be assigned to projects as opposed to them freely choosing the projects to work on?
mercer 1377 days ago [-]
> Haven't thought very deeply about this, but FOSS doesn't seem like socialism, as the capital being competed for is user attention.

I think that's stretching the definition of 'competition' a bit too much.

First off, many people work on FOSS just because they want to, or because they believe it's a good thing, not because they want 'user attention'. Furthermore, I'm pretty sure that even the people who are motivated by user attention don't see it as a competition.

> FOSS projects without enough capital do not gain enough developers, and fall into disrepair and obscurity.

There are plenty of FOSS projects that do fine with just a handful of developers, or even just a single one, with perhaps the occasional pull request from others.

> Not sure what a socialist FOSS movement might look like, but maybe developers would be assigned to projects as opposed to them freely choosing the projects to work on?

Maybe I misunderstand, but I get the impression that you're thinking of 'state' socialism. I wouldn't say being assigned stuff is a core aspect of socialism.

All that said, it's not so much that I think FOSS is like socialism, but rather that FOSS is a counter-example to a common argument I hear, especially from the more hardcore 'everything should be market forces' capitalists: that we're primarily driven by competition.

shard 1376 days ago [-]
Hi Mercer, interesting points.

But I think that, in general, the FOSS projects that thrive - that gather users who report bugs, that have an active community - are going to be the ones which attract developers; not necessarily because developers seek fame and fortune, but by the simple fact that they are more likely to run into those projects, and are able to contribute to them more easily thanks to better documentation and active devs who can answer questions. It is the projects which compete for attention for their livelihood, like an emergent behavior, even if the developers don't explicitly target it.

Besides, developers are humans too: they can feel pride when their contributions are recognized, and they can consider the work to be paving the way to future employment. Contributing to large, active projects gives higher chances of both.

Regarding whether it's state socialism that I'm thinking of, you're probably right. I've never been very clear on the definition of socialism. On the most superficial level, I understand socialism to be that the community controls the means of production. So when applied to FOSS, that seems to mean that the whole community decides what projects to work on, and in the case where the community decides that something needs to be worked on but not enough devs volunteer, it seems like the work will need to be assigned.

Finally, as you said, it's not about FOSS being like socialism; the interesting aspect is that the devs are not directly competing for big cash prizes. I agree. There are not many industries so fundamental to our society which have a large contingent of people doing critical work for free. Perhaps education comes close. Politics too, with political power taking a role similar to the one attention plays for FOSS projects, but that's straying pretty far.

Ericson2314 1378 days ago [-]
The negligible unit costs of software mean that the success of Free Software should be derivable from those old theories. The monopolistic and rent-seeking alternative that is the proprietary computer industry is also really far from some Ricardian utopia.

I don't mean to engage in some "no true capitalism" libertarian defense, but rather point out that a lot of fine (if simplistic) economic models/theory have been corrupted by various ideologies. A lot of radical-seeming stuff is not radical at all according to the math, just according to your rightist econ 101 teacher.

yolomcsuperswag 1377 days ago [-]
The economy _is_ cooperation.

I.e. a trade is when two parties engage in a mutually beneficial exchange.

unishark 1378 days ago [-]
Perhaps because competition relates to monopolies, which is where governments intervene, and hence there is demand for economic analysis.

Libertarian economists talk about cooperation in terms of spontaneous order. Milton Friedman had his story of the process of manufacturing a pencil as "cooperation without coercion". Basically it's the "invisible hand" driving people to cooperate via price signals and self-interest. I don't know if there's much that can be done with the concept beyond that.

UncleOxidant 1378 days ago [-]
I've also worked a bit on the hardware side, as well as in EDA (Electronic Design Automation), the software used to design hardware. Since you already commented on the hardware side of things, I'll comment on the EDA side. The EDA industry is also very backwards and highly insular - it felt like an old boys' club. When I worked in EDA in the late aughts we were still using a version of gcc from about 2000. They did not trust the C++ STL, so they continued to use their own containers from the mid-90s; they did not want to use C++ templates at all, so generic programming was out. While we did run Linux it was also a very ancient version of RedHat - about 4 years behind. The company was also extremely siloed - we could probably have reused a lot of code from other groups that were doing similar things, but there was absolutely no communication between the groups, let alone some kind of central code repo.

EDA is essential for chip development at this point, and it seems like an industry ripe for disruption. We're seeing some inroads from open source EDA software: simulators (Icarus, Verilator, GHDL), synthesis (Yosys), and even open cores and SoC constructors like LiteX. In software land we've had open source compilers for over 30 years now (gcc, for example); let's hope some of these open source efforts make serious inroads in EDA.
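As one concrete example of how those open simulators get driven, here is a cocotb-style testbench sketch (cocotb is another open-source EDA project, not one named above; it runs Python testbenches against Icarus, Verilator, or GHDL). The DUT and its port names here are hypothetical, purely for illustration:

  # Hypothetical cocotb testbench for a simple up-counter DUT.
  # The signal names (clk, rst, count) are made up for this sketch.
  import cocotb
  from cocotb.clock import Clock
  from cocotb.triggers import RisingEdge

  @cocotb.test()
  async def counter_increments(dut):
      """Reset the counter, then check that it counts up by one per cycle."""
      cocotb.start_soon(Clock(dut.clk, 10, units="ns").start())
      dut.rst.value = 1
      await RisingEdge(dut.clk)
      dut.rst.value = 0
      await RisingEdge(dut.clk)
      before = int(dut.count.value)
      await RisingEdge(dut.clk)
      assert int(dut.count.value) == before + 1

The appeal is the same as in software land: plain-text tests, a free simulator, and a normal scripting language instead of a license server and a proprietary GUI.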

pizza234 1377 days ago [-]
> did run Linux it was also a very ancient version of RedHat - about 4 years behind

How is a 4-year-old distribution "very ancient"? There's very likely plenty of Ubuntu 16.04 in the field, and there's nothing inherently wrong with that.

RedHat uses "old" kernels, if that's what you're referring to with "ancient", but there are reasons for that, and fixes are backported; the kernels aren't left unpatched.

hajile 1377 days ago [-]
4 year old RHL is more like 8 years old for the rest of us. Stability above everything else has its price.
klyrs 1377 days ago [-]
FWIW a lot of C++ developers see std:: and TODO as synonyms. OTOH, having used some of Xilinx's tools, I'd be terrified to read the internals.
denkmoon 1377 days ago [-]
> std:: and TODO as synonyms

What does this mean? I've only been in the C++ game for 6 years and for me if I can get something from std rather than rolling my own or pulling in another library, I'm cheering.

klyrs 1377 days ago [-]
Yes, they're wonderful to have, don't get me wrong! They're general-purpose and robust! But they're also insufficient for a lot of needs. One thing that comes up for me a lot is the lack of statically allocated structures, and sometimes there are optimizations to be had by sacrificing certain functionality.
shaklee3 1377 days ago [-]
Lots of people replace the STL with a faster one. See EA's EASTL.
CountSessine 1376 days ago [-]
EA’s STL had more to do with custom allocators and the fact that a lot of the platforms we were developing for 10 years ago didn’t have mmap or paged memory management (or at least nothing that they wanted to expose to us lowly user-mode gamedevs).
chii 1378 days ago [-]
> Want a linter for your project? That's going to be $50k.

The thing is, I think it's a miracle that open source software even exists, and I don't find it strange that nowhere else has replicated the success. Because open source, at heart, is quite altruistic.

mercer 1378 days ago [-]
I think the principles underlying FOSS are found everywhere. It's just that the idea of 'markets' and 'transactions' being the ubiquitous thing has infected our thinking.
mannykoum 1378 days ago [-]
Wholeheartedly agree. In parts of society where our distorted ideas of "productivity" and "success" are absent, people share more freely. The island my family is from comes to mind (Crete, Greece). There - in the rural areas - people were able to deal with the 2007-2008 crisis a lot more effectively than they did in the cities, especially compared to Athens. What I observed is that if someone didn't have enough, the rest of the village would provide. There was a general understanding that this way the people with less would get back on their feet and help contribute to the overall pool.
mercer 1378 days ago [-]
Wonderful example!

I'd add that just looking at how families or circles of friends operate is also enlightening. The most cynical view is that these interactions are 'debt' based, but practically speaking that often isn't true either. I help my mother not because I calculate the effort she has invested in me or the value she brings me. I just do it because I love her and I don't need to think about how I've come to feel that way (which very well might be based on measurable behavior on her part).

bavell 1377 days ago [-]
IMO this has nothing at all to do with "distorted ideas of productivity and success" and everything to do with the rural vs urban lifestyle. There are plenty of examples like the one you gave that happen every day here in the U.S. As a general human rule, smaller communities are more tightly knit and care more for each other.
mannykoum 1377 days ago [-]
That is definitely a factor. More complex social systems provide a degree of distance between you and the rest of the community (how could anyone keep up with having a sort of relationship with a few million others?).

What is present in rural areas is a sort of respect and fear of exclusion (I think these two are reciprocal). But it is not impossible to have that at a local level in a large city. Having lived in both London and New York (similar population size, vastly different population density), I've seen this happen. In Manhattan specifically there is comparatively little respect for neighborhoods, while in London residents care for their boroughs. Some direct actions that might have fostered that are simple things like community gardens/allotments, which are very common in the UK.

That's my humble opinion; I'm not an expert in human social behavior.

chii 1377 days ago [-]
But does this only work when the number of people in the tribe is low, and everyone has some social accountability to each other?

The problem of a free-rider is ever present, and in big cities, that problem is evident and thus you don't get the sort of community effort that you see in a small village.

CountSessine 1376 days ago [-]
Yeah - see “Dunbar’s Number”

Beyond Dunbar’s Number, it’s not clear that you can employ mutual-aid theories like tit-for-tat to restrain free-riding and cheating.

ballenf 1378 days ago [-]
I think the truth is in the middle: there are many examples throughout history of secret processes, and of craftsmen jealously guarding them, shared only through apprenticeship programs where the payment was years of cheap labor.
mercer 1378 days ago [-]
Oh yeah, it's not a binary thing. That said, in the case of craftsmen and their apprentices, I think the relationship was much less cold and transactional than, say, my startup friends who got a bunch of interns as cheap labor. At times perhaps, but not generally.
JakeCohen 1378 days ago [-]
No. We have an inherent sense of fairness, and at scale the average person would not work for free. It's the opposite: FOSS has infected our thinking, causing young people to work for free for idealistic reasons while corporations take their work and make billions. Linus Torvalds should be a billionaire right now, richer than Bezos and Gates.
mobilefriendly 1378 days ago [-]
It actually took a lot of hard work in the early days to prove the model valid and to defend the legal code and rights that underpin FOSS.
Ericson2314 1378 days ago [-]
"Miracle" as in tireless efforts of the GNU project decades ago, not random act of the cosmic rays, let's be clear.
prewett 1377 days ago [-]
You of little faith...

This is the record of the miracle of St. iGNUtius [0]. Back in The Day, before all the young hipsters were publishing snippets of code on npm for Internet points, Richard Stallman seethed in frustration at not being able to fix the printer driver that was locked up in proprietary code. While addressing St. iGNUtius, patron saint of information and liberty, he had a vision of the saint saying that he had interceded for Stallman, and holding out a scroll. On the scroll was a seal, and written on the seal in phosphorescent green VT100 font was the text "Copyright St. iGNUtius. Scroll may be freely altered on the sole condition that the alterations are published and are also freely alterable." Upon opening the seal, and altering it according to the invitation, Stallman saw the scroll split into a myriad of identical scrolls and spread throughout the world, bringing liberty and prosperity to the places it touched. Stallman hired some lawyers to write the text of the seal in the legal terms of 20th century America. Thanks to the miracle of St. iGNUtius, software today is still sealed with that seal, or one of its variants, named after the later saints, St. MITchell and St. Berkeley San Distributo.

[0] A twentieth-century ikon of St. iGNUtius: https://twitter.com/gnusolidario/status/647777589390655488/p...

phkahler 1378 days ago [-]
It's far too easy for people new to this to overlook the importance of the Free Software Foundation.

The original RMS email announcing the project is a nice read. When placed in the context of his personal frustration with commercial software it can also be seen as a line in the sand.

edge17 1378 days ago [-]
Or it's artistic expression for a certain type of individual, and we have accessible art everywhere
x0 1377 days ago [-]
Isn't it a miracle! It's fantastic. I've been reading old computer magazines from the 80s and 90s on archive.org, and the costs to set yourself up back then were just astronomical. I remember seeing C compilers priced at $300, or even more.
noir_lord 1377 days ago [-]
I’m just 40; I started programming in ’87 and got serious around ’93. I remember getting my hands on a copy of Turbo Pascal from a family friend - not expensive, but it was for a kid - and it was amazing.

Now I grumble if it’s a pain to install a massive open source toolchain for free, and these days it’s rarely painful.

I should stop and appreciate that more once in a while.

lowtolerance 1377 days ago [-]
I think a full copy of Visual Studio in the late 90s was like $700-800.
ddevault 1378 days ago [-]
This isn't true. Writing open source software is more profitable: it's cheaper (because everyone works on it) and works better (because everyone works on it).
BoysenberryPi 1378 days ago [-]
Do you have data to back this up? Because all accounts I've heard say that it's incredibly difficult to make money in open source.
whatshisface 1378 days ago [-]
The easiest way to make money in open source is to be a company that makes money from a product that depends on, but isn't, the open source product. Then other businesses will improve your infrastructure for free and make you more money.
chii 1377 days ago [-]
aka the free-rider problem.

That's why I don't think open source hardware will work; that this model "worked" in the software world is a miracle. By "worked", I mean that it exists.

whatshisface 1377 days ago [-]
It's not a free-rider problem if you were the one that released the open source infrastructure, or if you use it, need to change it, and send the patches back in. (That's why the GPL is a good license.)
stickfigure 1378 days ago [-]
> it's incredibly difficult to make money in open source

This is true, but in the markets where open source thrives, it's even harder to make money in closed source. One talented programmer might make a new web framework, release it, and gain some fame and/or consulting hours. Good luck selling your closed source web framework, no matter how much VC backing you have.

ddevault 1378 days ago [-]
Making money only writing open source is possible, but complicated and out of scope for this comment.

But writing some open source - making your tools (compilers, linters, runtimes), libraries, frameworks, monitoring, and sysops/sysadmin tooling open source - is much more profitable, and that's a huge subdomain of open source out there right now.

BoysenberryPi 1378 days ago [-]
This doesn't really answer my question though. The other reply to my question seems spot on: if you divide all company actions into making money or saving money, then open sourcing your toolset is more of a saving-money thing, since you can get people who aren't on your payroll to contribute. That's all well and good, but not exactly what I think of when someone says "making money with open source."
phkahler 1378 days ago [-]
That seems like an accounting mindset. Engineering is not a cost center (to be minimized); it's an investment in the future. This view offers a lot more flexibility and options - including cooperation. It also suggests that you may get back more than you put in. Invest wisely, of course.
BoysenberryPi 1378 days ago [-]
> It also suggests that you may get back more than you put in

You can also get back significantly less than what you put in.

nostrademons 1378 days ago [-]
For a lot of corporate open-source the goal is to commoditize a complement of your revenue-generating product and hence generate more sales. If you're Elastic, having more companies running ElasticSearch increases the number of paying customers looking for hosted ElasticSearch. If you're RedHat or IBM, having more companies using Linux increases the number of companies looking for Linux support. If you're Google, having more phones running Android increases the number of devices you can show ads on.

Similarly for independent developers. You don't make money off the open-source software itself. You make money by getting your name out there as a competent developer, and then creating a bidding war between multiple employers who want to lock up the finite supply of your labor. The delta in your compensation from having multiple competitive offers can be more than some people's entire salary.

tlrobinson 1378 days ago [-]
DARPA is also funding ($100M) open-source EDA tools ("IDEA") and libraries ("POSH"): https://www.eetimes.com/darpa-unveils-100m-eda-project/ (this was 2 years ago, I'm not sure where they're at now)
DHaldane 1378 days ago [-]
They just wrapped up Phase 1 of IDEA and POSH, programs that were explicitly trying to bring FOSS to hardware. There are now open-source end-to-end automated flows from RTL to GDS, like OpenROAD.

JITX (YC S18) was funded under IDEA to automate circuit board design, and we're now celebrating our first users.

Great program.

DesiLurker 1378 days ago [-]
I have thought about this, and I have many friends in the EDA industry. I can say with conviction that you are absolutely right. If you want to imagine a parallel to software, just imagine what would have happened to the open source movement if gcc did not exist. That is the first choke point in EDA, and then there are proprietary libraries for everything. Add to this intentionally suboptimal software designed to maximize revenue, and you get a taste of where we are at this point. IMO the best thing that could happen is for some company or a consortium of fabless silicon companies to buy up an underdog EDA company and just open-source everything, including code/patents/bug reports. I'd bet within a few years we would have more progress than we've had in the last 30 years.
kickopotomus 1378 days ago [-]
I think the underlying issue here is that IC design is one or two orders of magnitude more complex than software. In my experience, the bar for entry into actual IC design is generally a master's or PhD in electrical engineering. There is a lot that goes into the design of an IC: everything from the design itself to simulation, emulation, and validation. Then, depending on just how complex your IC is, you also have to think about integration with firmware and everything that involves.
DHaldane 1378 days ago [-]
I don't know that hardware is inherently more complex than software.

The issue I see in hardware is that all complexity is handled manually by humans. Historically there has been very little capacity in EDA tools for the software-like abstraction and reuse which would allow us to handle complexity more gracefully.

Ericson2314 1378 days ago [-]
Nope this is just not true.

You can crank out correct HDL all day with https://clash-lang.org/, and the layout etc. work that takes that into "real hardware", while not trivial, involves less creative optimization problems.

This is a post-hoc rationalization of the decrepitude of the (esp. Western[1]) hardware industry, and of the rampant credentialism that arises when the growth of the moribund rent-seekers doesn't create enough new jobs.

[1]: Go to Shenzhen and you'll find the old IP-theft and FOSS traditions have merged into a funny hybrid that's a lot more agile and interesting than whatever the US companies are up to.

wrycoder 1378 days ago [-]
That might be true for analog/mixed-signal design, but not for CMOS. The design rules are built into the CAD. The design itself is an immense network of switches.

Not having an advanced degree doesn’t mean you can’t master complexity. It’s the same as with software.

kickopotomus 1378 days ago [-]
The same issues apply to digital ICs as well. The design is much more than just the rules encoded into the logic. What process node are you targeting? Is there an integrated clock? What frequency? What I/O does the IC have? What are the electrical and thermal specifications for the IC? Does it need to be low Iq? What about the pinout? What package? How do you want to structure the die? There are a lot of factors involved in answering these questions, and they are highly interdependent.

The advanced degree is not meant to teach you how to grok complexity. It's to teach you what problems you can expect to encounter and how to go about solving them.

wrycoder 1377 days ago [-]
I suggest you won't learn those things in college. You RTFM and get some experience on real-life design projects.

Now, if you are in CMOS process design or responsible for the CAD software itself and its design rules, it helps (a lot!) to be a physicist. For that you need the MS and PhD training.

vvanders 1377 days ago [-]
Signal integrity and analog design start to share a lot of similarities as you push around the edges.
wrycoder 1377 days ago [-]
Absolutely. At the bottom, it's all analog!
radicaldreamer 1378 days ago [-]
And the cost of failure is a lot, lot higher... it’s a different model of development because there are many different challenges involved.
neltnerb 1378 days ago [-]
Why is it so high if the fab is free? Intrinsic cost of development time and tooling?
rcxdude 1378 days ago [-]
Time especially (though costs are also high). Even with infinite budget you still have a cycle time of months between tape-out and silicon in hand.
neltnerb 1378 days ago [-]
Ah, you're talking about this as a current-day commercial project. Carry on :-)

I'm a professional PCB designer, among other things. Most of what I have learned came from hard-won experience designing PCBs for projects done for myself and my own education, even though a custom PCB was hardly cost-effective in a commercial sense. They were just much cheaper since my time was free. They weren't useless projects, in that what I made was not easy with off-the-shelf parts, but it would have been possible, and there wasn't a timeline to pressure me yet.

But I would not be a professional PCB designer now if the field had not been approachable to someone without a budget but with plenty of time and motivation. I've basically spent as much money learning how to make PCBs as other people spend on ski trips or vacations or other hobbies. A free fab, and tools to create designs, are a godsend when designing these things is something you want to do for interesting experiments and learning in an otherwise totally inaccessible field. Even if you think one needs a master's or PhD to do this right, being able to fail cheaply is a pretty amazing learning tool...

That's what this announcement means to me -- free fab means I can finally learn how to do this and get good enough at it that, when the time comes that this is a better solution than trying to combine off-the-shelf functionality, I will be well positioned to take advantage of that change.

I am thrilled to release open source designs for cool chip functionality once I'm skilled enough to do it, and the only way I'd get there is if the direct cost to me was nothing (even if it's slow).

jiveturkey 1377 days ago [-]
> free fab means I can finally learn how to do this

Hate to tell you this, but probably not. There are only 40 slots, and those are going to be quickly eaten up by people who already know how to do this.

The fab isn't the expensive part anyway; you can get a small run of ASICs done for single-digit thousands of dollars.

neltnerb 1377 days ago [-]
If the fab isn't that expensive, all the more reason to learn how to do it proficiently for free, no? Even if it doesn't get produced it seems like a good learning project.
jiveturkey 1377 days ago [-]
Absolutely! But that already existed (free tools that don't result in something that can be fabbed).

What you would get out of this is access to a Google-promoted open source PDK. It's specific to SkyWater, so "open source" doesn't mean a whole lot - not today, anyway. The available libraries are very immature. It's clearly a nice promotion for SkyWater, one I am enthusiastic about, but nonetheless not the shimmering beacon you're looking for.

You already know this: time is your most precious commodity. Don't bumble your way through a thinly disguised and immature FOSS offering. Cadence Virtuoso online training is just $3k per seat per year[1]. That said, I'm completely talking out of my rear: I've no idea if the training is actually useful without having a license for Virtuoso, where the webs say license costs are 6 figures per seat. I'd enroll in a university program to get access ... [2]

I just suggest this based on your stated goal: to have enough proficiency to use this professionally. Certainly you don't need 10,000 hours, and some of that is amortized by related experience, but time is your #1 enemy here, not money. Any money you can spend to jumpstart things is money well spent. OTOH, if you just want to enjoy exploring IC design in a noncommittal hobbyist way, as a learning "experience" a la MasterClass, then this project does seem an excellent starting point.

SkyWater is a "trusted foundry", whatever that means - apparently completely US-based. Therefore, I think the most valuable thing that could ultimately (say, 5 years hence) come out of this would be for OpenTitan to be "ported" to ASIC and to the SkyWater process. 130nm is large, but for this application - for IoT in general, for anything not a mobile handset - it would be powerful. Imagine a Raptor TALOS workstation with an OpenTitan RoT! Or your own-design PCIe card with your own fabbed and X-ray-verifiable RoT. Powerful.

[1] https://www.cadence.com/content/dam/cadence-www/global/en_US... (slides 3 and 4)

[2] https://www.cadence.com/en_US/home/company/cadence-academic-...

riking 1377 days ago [-]
Heck, it's a whole 10 sq mm. I'm sure they could work something out to combine multiple smaller projects into one slot, muxed by the supervisor.
jiveturkey 1377 days ago [-]
And then you're going to do a secondary saw operation? $$$
rcxdude 1378 days ago [-]
Yeah, it's a lot more like PCB design, just more extreme in cost and time (especially when it comes to debugging and rework). I don't think it's particularly more intrinsically difficult for digital circuits, but it's a lot different from the rapid iteration cycles you have in software (which is the main point). Because of the timescales, even for hobbyist stuff you want to invest a lot more in verification of your design before you actually build it. Also, the amount of resources available for it is dire; even for FPGAs, a hobbyist has a much harder time finding useful information compared to software.
georgeburdell 1378 days ago [-]
IC design does not need a masters or PhD, that's just what companies want to hire. The old school guys do not have these accreditations and somehow they're still doing fine.

-Engineering PhD

Treblemaker 1375 days ago [-]
It's not "IC design" that's orders of magnitude more complex. It's "IC implementation" where the complexity lies.

Design for test, logical verification (simulation, logical equivalence checks, electrical design rules), physical implementation (library development and characterization, floorplanning, place and route, signal integrity), and signoff (timing closure, electrical design rules (again), physical verification, OPC) all require highly complex tools to automate. For large designs you will have people dedicated to each individual step, because the ways in which things can go wrong - and the absolute necessity of things going right to get a working chip - are legion. And there's no substitute for experience in knowing the right questions to ask.

It's like the difference between driving your car across the country and flying to orbit. If you make a wrong turn in your car you can just make another turn or backtrack (edit the source code and rebuild). If you don't have the right torque on the tank strut bolts on your rocket, "You will not go to space today".

Source: 30 years in the ASIC and EDA industries doing chip implementation and EDA tool flow development.

gchadwick 1378 days ago [-]
> Want a linter for your project? That's going to be $50k

Another interesting open source EDA project coming out of Google is Verible: https://github.com/google/verible which provides Verilog linting amongst other things.

robomartin 1378 days ago [-]
This is a reality that exists in any limited market. Tools like nginx and mysql count their for-profit users in the millions. This means there are tremendous opportunities for supporting development. By this I mean companies and entities who use the FOSS products in support of their for-profit business in other domains, not directly profiting from the FOSS.

FOSS development isn't cost-less. And so the business equation is always present.

The degree to which purely academic support for open source can make progress is asymptotic. People need to eat, pay rent, have a life, which means someone has to pay for something. It might not be directly related to the FOSS tool, but people have to have income in order to contribute to these efforts.

It is easy to show that something like Linux is likely the most expensive piece of software ever developed. This isn't to say the effort was not worth it; it's just to point out that it wasn't free - it has a finite cost that is likely massive.

An industry such as chip design is microscopic in size in terms of the number of seats of software used around the world. I don't have a number to offer, but I would not be surprised if the user base was ten million times smaller than, say, the mysql developer population (if not a hundred million).

This means that nobody is going to be able to develop free tools without massive altruistic corporate backing for a set of massive, complex, multi-year projects. If a company like Apple decided to invest a billion dollars to develop FOSS chip design tools and give them away, sure, it could happen. Otherwise, not likely.

gentleman11 1378 days ago [-]
I am finding game development to be a tiny bit like this also: very little open source, lots of home-made clunky code, lots of NDAs and secrets. Generally, a much worse developer experience with worse tooling overall. To play devil’s advocate, this makes game dev harder, which isn’t entirely bad, because there is already a massive number of games being made that can’t sell, so it reduces competition a tiny bit. Also, it’s nice to know you can write a plugin and actually sell it. Still, it’s weird. The Unreal community can even be a bit unfriendly or hostile, and they will sometimes mock you if you say you are trying to use Linux. Then again, Unity’s community is unbelievably helpful.
jfkebwjsbx 1378 days ago [-]
Commercial games benefit far less than other code from being open source, since they are very short-lived projects once released.

Further, games would be very easy for users to copy, for cheat developers to target in multiplayer, for competitors to duplicate, etc.

gentleman11 1378 days ago [-]
The games, sure. But what about the tooling? Behaviour trees, FSM visualizers, debug tools, better linters, multiplayer frameworks, ECS implementations, or even just tech talks. Outside of game dev there are so many tech talks, all the time, on every possible subject, sharing knowledge. In game dev there is GDC and a few others, but it’s just far less common.
tpxl 1377 days ago [-]
The most closed multiplayer games die the day the servers shut down.

The longest lived games are the longest lived because they are open enough for the community to keep them alive.

vvanders 1377 days ago [-]
That depends a lot on platform and engine. X360 development was still one of the best damn environments on dedicated hardware I've ever worked in. PIX was nothing short of amazing, and Visual Studio is still a bar I haven't seen touched from an IDE perspective.

Compare that with something like Android where I'm lucky if I can get GDB to hit a single breakpoint on a good day.

marktangotango 1378 days ago [-]
Could a lot of this backwardness also be explained by patents and litigation risk? There are a lot of patents around hardware; it seems like there'd be a high chance of implementing something in hardware that is patented without knowing it's patented.
foobiekr 1378 days ago [-]
The whole industry still operates like the 90s, right down to license management and terrible tooling. It's one of the few multi-billion-dollar industries that was mostly untouched by the dotcom era, and it is still very old-school today.

The problem is, it's also a very, very tough industry to disrupt. Not for lack of trying though.

stock_toaster 1377 days ago [-]
It's interesting to consider this from a finance standpoint too. Consider/compare/contrast with open source software ecosystems, where large companies are able to effectively "strip mine" large amounts of value from the ecosystem.

Hopefully there is a middle ground somewhere, where the folks working on open source software can get compensated for their work so as to enable a healthier system overall, so we aren't all just “software serfs” sharecropping for our overlords.

wizzwizz4 1377 days ago [-]
License your work under the AGPL, and they can't do that – at least, without arranging another license with you.
ian-g 1378 days ago [-]
My last job was supporting some hardware companies' design VC. Absolutely insane.

I think it's also a cultural thing. Like you said, lots of your own special secret sauce, and so many issues trying to fix bugs that may have to do with that secret sauce.

Can't say I miss it at all really.

chubot 1377 days ago [-]
Yes 100%. Maybe software is really unique in this regard, and HN and such forums are a gift?

Tangent: I've noticed this problem in dentistry and sleep apnea! The methods of knowledge transfer in the field seem to be incredibly inefficient? Or there are tons of dentists not learning new things? (I recall a few dentists on HN -- I would be interested in any comment either way on this.)

The reason I say this is that many patients can't tolerate CPAP (>50% by some measures). There are other solutions, and judging by conversations with friends and family, dentists and doctors don't even know about them!

----

My dentist gave me one of these alternative treatments for sleep apnea, which was very effective. It's a mandibular advancement device (oral appliance). Even the name is bad! They have a marketing and branding problem.

Random: Apparently Joe Rogan's dentist did a similar thing with a different device which he himself invented. Rogan mentioned it 3 times on the air.

So basically it appears to me that practitioners in this field don't exchange information in the same way that software practitioners do (to say nothing of exchanging the equivalent of working code, which is possible in theory).

I looked it up and there are apparently 200K dentists in the United States. It seems like a good area for a common forum. I think the problem is that dentists probably compete too much rather than cooperate? There is no financial incentive to set up a site of free knowledge exchange.

Related recent threads I wrote about this:

https://news.ycombinator.com/item?id=23666639 (related to bruxism, there seem to be a lot of dentists who don't seem to understand this, as I know multiple people with sleep apnea and bruxism who don't understand the connection)

https://news.ycombinator.com/item?id=23435964 (the book Jaws)

brnt 1377 days ago [-]
> mandibular advancement device

Those things are in the basic apnea treatment palette over here (Netherlands), actually more common than PAP machines. Why do you think they are alternative?

chubot 1377 days ago [-]
Well the picture might be totally different in the Netherlands. In the US, CPAP is the most common treatment, even though so many people can't use them (including me).

I've talked to several people who have a CPAP, and they don't even know of the existence of the mandibular advancement device. Their doctors and dentists apparently don't tell them about it!

I'm puzzled why that is the case. I think it has something to do with insurance. If that's true, it's not surprising that other countries don't have the same problem!

m463 1377 days ago [-]
I think it might need a "stone soup" kickstart.
dehrmann 1378 days ago [-]
This sounds like there might be non-trivial gains out there if more people looked at how HDL is compiled to silicon.
techslave 1377 days ago [-]
also have you seen what passes for software in the hardware world? i'm talking about EDA. i haven't used cadence etc but altium is a trash fire and it's $5k/seat. below that level the tools are even more atrocious. i'm mystified why someone doesn't come in and disrupt that space.
madengr 1377 days ago [-]
I don’t know why you say it’s a trash fire. I’ve been using it for 25 years and it works well, granted I don’t do more than 8 layer boards, but they are dense RF.

It will require a team knowing professional software development and professional PCB design to disrupt it. When you get only one of those, you get the trash fire.

aswanson 1378 days ago [-]
Posts like this make me so glad I was directed by market forces out of hardware design into software.
andrepd 1377 days ago [-]
Absolutely. The capitalist conception of closed competition and profit motive is taken virtually as dogma nowadays. However we see its many disadvantages: there are multiple companies ("teams of people") spending billions of dollars and millions of man-hours on largely duplicated efforts, which in the end lead to an inferior product compared to what could be done with cooperation.

Imagine a combined Intel+AMD+Samsung+Nvidia behemoth pooling together their expertise. Internal competition would still exist, but for actual technical reasons now instead of market ones. One could imagine myriad ways to fund such a cooperative endeavour, which are never even tried because the current model is sacred.

staycoolboy 1376 days ago [-]
You're making an argument that open source is better because it is free. This is a 30 year old argument. The problem is, people need to get paid in the interim. For example, Intel is the biggest contributor to the Linux kernel, but without Intel paying its employees by charging for chips, millions of patches would never have made it into Linux. I'm not saying you're wrong, I'm saying it is more nuanced than you are implying.
josemanuel 1378 days ago [-]
The interesting thing with open source is that it devalues the contribution of the software engineer. Your effort and ideas are now worth 0. You either do that work for free, on your spare time, or are paid by some company to write software that is seen, by the company paying you, as a commodity. Open source is at the extreme end of neoliberalism. It is really a concept from the savage capitalist mentality of the MBA bots that run the corporate world. They certainly love open source.
x0 1377 days ago [-]
Companies love open source projects with MIT-style licenses. If you license your project GPL, no company will touch it, unless they really, really have to.
est31 1378 days ago [-]
This is amazing.

I think the main reason why open source has taken off is that access to a computer is available to many people, and as the cost is negligible, it only requires free time and enough dedication + skill to be successful. For hardware though, each compile/edit/run cycle costs money, software often has 5-digit per seat licenses, and thus the number of people with enough resources to pursue this as a hobby is quite small.

Reduce the entry cost to affordable levels, and you have increased the number of people dramatically. Which is btw also why I believe that "you can buy 32 core threadripper cpus today" isn't a good argument to ignore large compilation overhead in a code base. If possible, enable people to contribute from potatoes. Relatedly, don't require gigabit internet connections: downloading megabytes of precompiled artifacts that change daily isn't great either.

mhh__ 1378 days ago [-]
Not only is the software expensive it's often crap. By which I don't mean, oh no it doesn't look nice - crap as in productivity-harming.

For example, Altium Designer is probably the most modern (not most powerful although close) PCB suite and yet despite costing thousands a seat it is a slow, clunky, single-threaded (in 2020) program (somehow uses 20% of a 7700k at 4.6GHz with an empty design). Discord also thinks that Altium Designer is some kind of Anime MMO

pantalaimon 1378 days ago [-]
KiCad nightly can now import Altium Designer files, might want to give it a try ;)

https://kicad-pcb.org/blog/2020/04/Development-Highlight-Alt...

mhh__ 1378 days ago [-]
KiCad is getting very good but it requires a lot of work to compete with the big boys - for example there's no signal integrity built in, and impedance control is fairly detached from your board i.e. I don't think you can do RC on impedance control yet. I don't need a huge amount but signal integrity is fairly important for the project I'm designing.
imtringued 1378 days ago [-]
From what I can tell a lot of parametric design software is also single threaded. I felt like this was an opportunity where use of multiple cores could make Freecad stand out a little bit. Except Freecad uses opencascade as their kernel and they require you to sign a CLA just to download the git repository. Considering that barrier to just cloning the code, I decided not to contribute anything. They do offer zip file downloads of the source code but at that point I lost interest.
nybble41 1378 days ago [-]
> Except Freecad uses opencascade as their kernel and they require you to sign a CLA just to download the git repository.

    git clone https://git.dev.opencascade.org/repos/occt.git
It's not well-advertised, but they do offer public read-only HTTP access to the git repository.[1] This URL really should be listed on the Resources page as well as the project summary in GitWeb.

[1] https://dev.opencascade.org/index.php?q=node/1212#comment-86...

phkahler 1378 days ago [-]
SolveSpace now has some code paths multithreaded. It's not clear if this will make the next release but you can build from source with -fopenmp.

Like you say, it's kind of shocking to see one core running at 100 percent while the rest do nothing and the app is sluggish in 2020.

CompAidedPoster 1378 days ago [-]
I suspect geometric kernels and 2D/3D renderers don't fall into the "easy to parallelize" category. Of course there are functions that use multiple threads, but it's not obvious how you could build the core system to do so. However the code in CAD software is often pretty old; it wasn't that long ago that many of these still used immediate mode OpenGL and I wouldn't be surprised if some still do.

In the same vein, something like ECAD tools don't use GPU-accelerated 2D rendering but instead use GDI and friends (which used to be HW-accelerated, but hasn't been since WDDM/Vista).

A lot of "easy" opportunities to improve UX and productivity.

jschwartzi 1378 days ago [-]
It seems like it depends a lot on your representation of the circuit network. If you consider each trace and PCB element as a node in a graph which maps the connections of the traces and PCB elements then you could parallelize provided you can describe the boundary conditions at each node. There's a degree to which they're interdependent, but there are also nodes at which the boundary condition is effectively constant and I think those would make good cut points for parallelization.
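
A toy sketch of that cut-point idea (the graph encoding, the choice of constant-boundary nodes, and the per-partition "analysis" are all invented for illustration; no real signal integrity tool is this simple):

    from concurrent.futures import ProcessPoolExecutor

    def partition_at(graph, constant_nodes):
        """Split a {node: set(neighbors)} netlist graph into components,
        cutting at nodes whose boundary condition is treated as constant
        (e.g. supply rails)."""
        seen, parts = set(constant_nodes), []
        for start in graph:
            if start in seen:
                continue
            stack, comp = [start], set()
            while stack:
                n = stack.pop()
                if n in comp:
                    continue
                comp.add(n)
                stack.extend(m for m in graph[n] if m not in constant_nodes)
            seen |= comp
            parts.append(comp)
        return parts

    def analyze(component):
        # stand-in for an independent per-partition solve
        return len(component)

    if __name__ == "__main__":
        # toy netlist: two subcircuits joined only through VDD/GND
        g = {"VDD": {"a", "c"}, "GND": {"b", "d"},
             "a": {"VDD", "b"}, "b": {"a", "GND"},
             "c": {"VDD", "d"}, "d": {"c", "GND"}}
        parts = partition_at(g, {"VDD", "GND"})
        with ProcessPoolExecutor() as pool:
            print(list(pool.map(analyze, parts)))  # two independent jobs
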
stjo 1378 days ago [-]
> Discord also thinks that Altium Designer is some kind of Anime MMO

Hardly Altium Designer's fault, but I too would avoid using it.

swiley 1378 days ago [-]
I thought xpcb/gschem were decent although admittedly I’ve only ever tried PCB design once.
fhssn1 1378 days ago [-]
I believe you're talking about the EDA toolchain.

Even though it has a long history of open-source attempts, as pointed out by Tim in his presentation, they are few and far between, and massively underwhelming compared to the thriving open source software community.

However, if this initiative takes off, it'll be a big help in creating an open source EDA toolchain community.

gchadwick 1378 days ago [-]
> However, if this initiative takes off, it'll be a big help in creating an open source EDA toolchain community.

The opensource EDA toolchain community is already producing some good stuff, Symbiflow: https://symbiflow.github.io/ is a good example, it's an open source FPGA flow targeting multiple devices. It uses Yosys (http://www.clifford.at/yosys/) as a synthesis tool which is also used by the OpenROAD flow: https://github.com/The-OpenROAD-Project/OpenROAD-flow which aims to give push-button RTL to GDS (i.e. take you from Verilog, which is one of the main languages used in hardware to the thing you give to the foundry as a design for them to produce).

The Skywater PDK is a great development and a key part of a healthy opensource EDA ecosystem, though there's plenty of other great development happening in parallel with it. You will note there are some people who are involved in several of these projects; they're not all being developed in isolation. The next set of talks on the Skywater PDK include how OpenROAD can be used to target Skywater: https://fossi-foundation.org/dial-up/

hamandcheese 1378 days ago [-]
Very similar to what you just said, I suspect that a driving factor in the state of open source in hardware is that anyone working in hardware almost by definition has large corporate backing, since producing hardware is so capital intensive (compared to software).

If that is basically a given, why publish anything for free, when you can instead charge 10k/seat in licensing?

devit 1378 days ago [-]
Possibly because initially the open source software will be significantly worse than the proprietary software and thus won't get any sales, and it will only be better with a lot of contributions, but then it's already freely available and so it still won't get any sales (but might get support/SaaS contracts).
andy_ppp 1378 days ago [-]
Sounds like there should be open source software for such a thing? I bet the software for laying out transistors and so on will suddenly become viable with something like this, good idea Google!
orbifold 1378 days ago [-]
There is open source software, a good overview is on http://opencircuitdesign.com/qflow/index.html.
andy_ppp 1378 days ago [-]
Ripe for some innovation maybe...
leojfc 1378 days ago [-]
Strategically, could this be part of a response to Apple silicon?

Or put another way, Apple and Google are both responding to Intel/the market's failure to innovate, each in their own idiosyncratic manner:

- Apple treats lower layers as core, and brings everything in-house;

- Google treats lower layers as a threat and tries to open-source and commodify them to undermine competitors.

I don’t mean this free fabbing can compete chip-for-chip with Apple silicon of course, just that this could be a building block in a strategy similar to Android vs iOS: create a broad ecosystem of good-enough, cheap, open-source alternatives to a high-value competitor, in order to ensure that competitor does not gain a stranglehold on something that matters to Google’s money-making products.

Nokinside 1378 days ago [-]
These are not related at all. Only common element is making silicon.

Apple spends $100+ millions to design high performance microarchitecture to high-end process for their own products.

Google gives a tiny amount of help to hobbyists so that they can make chips for legacy nodes. Nice thing to do, nothing to do with Apple SoC.

---

Software people in HN constantly confuse two completely different things

(1) Optimized high performance microarchitecture for the latest processes and large volumes. This can cost $100s of millions and the work is repeated every few years for a new process. Every design is closely optimized for the latest fab technology.

(2) Generic ASIC design for a process that is a few generations old. Software costs a few $k or $10ks and you can use the same design for a long time.

_bxg1 1378 days ago [-]
> Nice thing to do

I don't believe Google does anything because it's a "nice thing to do". There's some angle here. The angle could just be spurring general innovation in this area, which they'll benefit from indirectly down the line, but in one way or another this plays to their interests.

sangnoir 1377 days ago [-]
> I don't believe Google does anything because it's a "nice thing to do".

If only Google had this singular focus... From my external (and lay) observation - some Google departments will indulge senior engineers and let them work on their pet projects, even when the projects are only tangentially related to current focus areas.

Looking at Google org on Github (https://github.com/google); it might be a failure of imagination on my part, but I fail to see an "angle" on a good chunk of them.

sixothree 1377 days ago [-]
Google has never created a product that does not collect data in a unique manner apart from its other products.
ladyanita22 1377 days ago [-]
They must be some kind of geniuses. I don't see how they are going to be able to extract personal information out of here.
sixothree 1377 days ago [-]
They're not doing this out of the kindness of their heart. Just because we don't know the data being collected here (yet) does not invalidate my statement. Name a google product and you can easily identify the unique data being collected.
emiliobumachar 1377 days ago [-]
Not necessarily personal. Maybe training a robot to design circuits?
jagger27 1378 days ago [-]
> few generations old

And by old, I mean /old/. 130 nm was used on the GameCube, PPC G5, and Pentium 4.

denkmoon 1377 days ago [-]
Think of all the chips from then and before that are becoming rare. The hobbyist and archivist communities do their best with modern replacements, FPGAs, and keeping legacy parts alive, but being able to fab modern drop-in replacements for rare chips would be amazing.

Things don't have to be ultra modern to offer value.

yummypaint 1378 days ago [-]
That's not terribly long ago, really. My understanding is that a sizeable chunk of performance gains since then have come from architectural improvements.
zrm 1378 days ago [-]
Probably the fastest processor made on 130nm was the AMD Sledgehammer, which had a single core, less than half the performance per clock of modern x64 processors, and topped out at 2.4GHz compared to 4+GHz now, with a die size basically the same as an 8-core Ryzen. So Ryzen 7 on 7nm is at least 32 times faster and uses less power (65W vs. 89W).

You could probably close some of the single thread gap with architectural improvements, but your real problems are going to be power consumption and that you'd have to quadruple the die size if you wanted so much as a quad core.

The interesting uses might be to go the other way. Give yourself like a 10W power budget and make the fastest dual core you can within that envelope, and use it for things that don't need high performance, the sort of thing where you'd use a Raspberry Pi.
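
A quick sanity check of that "at least 32 times" figure, treating per-clock throughput, frequency, and core count as the only factors (the ratios below are just the rough numbers from this comment):

    # back-of-the-envelope, not a benchmark
    ipc_ratio   = 2.0        # "less than half the performance per clock"
    clock_ratio = 4.0 / 2.4  # ~4 GHz today vs. 2.4 GHz Sledgehammer
    core_ratio  = 8.0        # 8 cores in roughly the same die area
    print(ipc_ratio * clock_ratio * core_ratio)  # ~26.7x

With the per-clock gap a bit wider than 2x, that lands in the "at least 32 times" ballpark.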

snovv_crash 1377 days ago [-]
You wouldn't get access to ASIC fab just to make a CPU. Fill it with tensor cores, or fft cores, plus a big memory bus. Put custom image processing algorithms on it. Then it will be competitive with modern general silicon despite the node handicap.
yummypaint 1378 days ago [-]
Your suggestion was more what I was thinking, perhaps something more limited in scope than a general processor. An application that comes to mind is an intentionally simple and auditable device for e2e encryption.
pflanze 1378 days ago [-]
My understanding is that architectural improvements (i.e. new approaches to detect more parts in code that can be evaluated at the same time, and then do so) need more transistors, ergo a smaller process.

(Jim Keller explains in this interview how CPU designers are making use of the transistor budget: https://youtu.be/Nb2tebYAaOA)

janekm 1378 days ago [-]
My first reaction was that it could be a recruitment drive of sorts to help build up their hardware team. Apple have been really smart in the last decade in buying up really good chip development teams and that is experience that is really hard to find.
baybal2 1378 days ago [-]
> Apple have been really smart in the last decade in buying up really good chip development teams and that is experience that is really hard to find.

They can outsource silicon development. Should not be a problem with their money.

In comparison to dotcom development teams, semi engineering teams are super cheap. In Taiwan, a good microelectronics PhD starting salary is USD $50k-60k...

ethbro 1378 days ago [-]
Opportunity cost, though.

Experienced teams who have designed high performance microarchitectures aren't common, because there just isn't that much of that work done.

And when you're eventually going to spend $$$$ on the entire process, even a 1% optimization on the front end (or more importantly, a reduction of failure risk from experience!) is invaluable.

pjc50 1378 days ago [-]
Does Google have a silicon team?
trsohmers 1378 days ago [-]
As of a year and a half ago they had 300+ people across Google working on silicon (RTL, verification, PD, etc.) that I'm aware of.
daanluttik 1378 days ago [-]
They created TPUs, right? So somewhere inside the Alphabet group they must have some expertise
shagie 1377 days ago [-]
It wouldn't surprise me. They've been designing custom hardware for some time. Look at the Pluto switch and the "can we make something even more high performance" or "we can make it simpler, cheaper, more specialized and save some watts" (which in turn saves on power for computing and power for cooling costs).

At the scale that Google is at, it really wouldn't surprise me if they were working on their own silicon to solve the problems that exist at that scale.

jeffbee 1377 days ago [-]
Pluto is merchant silicon in a box, like all their other switches.

"""Regularly upgrading network fabrics with the latest generation of commodity switch silicon allows us to deliver exponential growth in bandwidth capacity in a cost-effective manner."""

https://conferences.sigcomm.org/sigcomm/2015/pdf/papers/p183...

shagie 1377 days ago [-]
I wasn't intending to claim that Pluto is custom silicon but rather that Pluto is an example of Google looking for simplicity, more (compute) power, and less (electrical) power.

The next step in that set of goals for their data center would be custom silicon where merchant silicon doesn't provide the right combination.

harpratap 1378 days ago [-]
Manu Gulati - a very popular Silicon Engineer who worked at Apple left for Google. (He now works at Nuvia with other ex-Apple stalwarts)
orbifold 1378 days ago [-]
They have Norman Jouppi, he apparently was involved in the TPU design.
lrem 1377 days ago [-]
What are TPUs and quantum computers made of? ;)
amelius 1378 days ago [-]
Joel Spolsky calls this "Commoditizing your complement".
MiroF 1378 days ago [-]
I'm guessing GP was clearly referencing that phrase, not unaware of it.
andy_ppp 1378 days ago [-]
I mean, someone else said the software to design chips is 5 figures per seat, so it's probably a multi-billion dollar industry.

My guess would be that cloud-based chip design software is in the works. This would accelerate AI quite a bit, I should think?

londons_explore 1378 days ago [-]
More like 6 figures per seat...

It's actually a big part of why some silicon companies distribute themselves around timezones - so someone in Texas can fire up the software immediately when someone in the UK finishes work.

It's not unusual to see an 'all engineering' email reminding you to close rather than minimize the software when you go to meetings.

madengr 1378 days ago [-]
I thought most EDA companies put a stop to that with geographic licensing restrictions.
baybal2 1378 days ago [-]
And this is the reason some companies have shift work...

But that all means nothing for companies who buy Virtuoso copies from guys trading WaReZ in pedestrian underpasses in BJ.

A number of quite reputable SoC brands here in the PRD are known to be based on 100% pirated EDAs.

This is not a critique, but a call to think about that a bit.

In China, you can spin up a microelectronics startup for under $1m; in the USA, you will spend $1m just to buy a minimal EDA toolchain for the business.

Allwinner famously started with just $1m in capital, when disgruntled engineers from Actions decided to start their own business.

TedDoesntTalk 1378 days ago [-]
What is PRD? I’m guessing a country acronym?
baybal2 1378 days ago [-]
Pearl River Delta
H_Pylori 1378 days ago [-]
>A number of quite reputable SoC brands here in the PRD are known to be based on 100% pirated EDAs.

Not cool man, not cool.

klhugo 1378 days ago [-]
Absolutely not. "Apple Silicon" is branding for their own processor. This is a road to an opensource ecosystem in HW design.
sukilot 1378 days ago [-]
That's the same thing parent said, so "Absolutely yes".
timerol 1378 days ago [-]
TFA doesn't really summarize what's available very well, so let me take a shot from a technical perspective:

- 130 nm process built with Skywater foundry
- Skywater Platform Development Kit (PDK) is currently digital-only
- 40 projects will be selected for fabrication
- 10 mm^2 area available per-project
- Will use a standard harness with a RISC-V core and RAM
- Each project will get back ~100 ICs
- All projects must be open source via Git
- Verilog and gate-level layout

I'm curious to see how aggressive designs get with analog components, given that they can be laid out in the GDS, but the PDK doesn't support it yet.

raverbashing 1378 days ago [-]
I'm thinking 10mm^2 so kinda 3.3mm x 3.3mm. In 130nm

(Remember 130nm gave us the late-model Pentium 3s, the second-generation P4s and some Athlons, though all of these had a bigger die size)

I'm thinking you could have some low power or very specific ICs, where these would shine as opposed to a generic FPGA solution

moonchild 1378 days ago [-]
> 10mm^2 so kinda 3.3mm x 3.3mm

3.16mm x 3.16mm?

raverbashing 1377 days ago [-]
Square root by head is hard ;)
DCKing 1378 days ago [-]
From a hobbyist and preservation perspective, it would be cool if this could be used to produce some form of the Apollo 68080 core to revive the 68k architecture a little bit, and build out the Debian m68k port [0][1]. The last "big" 68k chips were produced in 1995 (that would be a 350nm process?) so this could be hugely improved on 130nm. The 68080 core is currently implemented in FPGAs only and is already the fastest 68k hardware out there. With a real chip, people could continue upgrading their Amigas and Ataris.

[0]: http://www.apollo-core.com/ I can't easily find how "open source" it is, but it's free to download.

[1]: https://news.ycombinator.com/item?id=23668057

herio 1378 days ago [-]
The Apollo core is not open source.

There are some other pretty nice and featured 68k cores that are open source (TG68, WF68K30L etc.) but none that is really close to the features and performance of the Apollo 68080.

armitron 1378 days ago [-]
Note that there have been IP theft and other shadiness allegations from ex-members of the Apollo team.

If you do a little research you'll find out that there's plenty of "stay away" and "can't believe they haven't been sued into oblivion yet" indicators and all sorts of misleading claims and marketing.

My summary would be that it's a tightly-controlled, closed project led by questionable people with well-documented histories of questionable practices, including ignoring copyrights, distributing infringing software, deleting critical posts from their forums and putting out misleading information.

DCKing 1377 days ago [-]
It seems I have a little bit more to learn on this project. Are there any sources I could read?
DCKing 1378 days ago [-]
Ah, that's a shame. I suppose this could be used to revive 68k without the Apollo core, but it's unfortunate that the engineering effort already there would not be available. Maybe this would be an incentive for them to open source it, but yeah.
moftz 1378 days ago [-]
The PowerPC 7457 was built on a 130nm process, used in the AmigaOne XE as well as the Apple G4 machines. That's probably about as good as you will get for a community-led chip fabrication. You could probably get it up to over 1GHz if you made a 68k at this size. A modern FPGA is probably the better way to go for this kind of thing. I doubt this free fab includes things like die testing and packaging. That's an expensive process, so someone would need to front some money to actually get the testing and packaging done for enough chips to make the cost actually worth it. It would be much cheaper to design an interposer board that could plug into a motherboard to take the place of the original 68k. This would also allow you to continuously upgrade the processor without requiring a whole new fab run.
perl4ever 1377 days ago [-]
>You could probably get it up to over 1GHz if you made a 68k at this size

That would be a fun upgrade if I had an old 68k Mac in good condition. I've thought occasionally about what if Motorola had had the resources to continue developing 68k like x86.

cmrdporcupine 1377 days ago [-]
Coldfire is what they would have, and did, make. By dropping a few problematic instructions they were able to use a more RISC-like approach and produce a nicer core. Most code just needs a recompile, or you can trap the old instructions and work around them.

I have a "Firebee", an Atari ST-like machine built around a 264mhz coldfire, and it's quite nice.

My understanding is that Coldfire got used in a lot of networking hardware, due in part to its network order endianness, and partially because Freescale put network hardware support into some of the cores.

But there is no continuing demand for Coldfire, really, so it has stalled; NXP never continued with it and is going the ARM route now, like everyone else.

rwmj 1378 days ago [-]
How fast can those FPGAs be clocked? Is it better to have a free but small run of 68k ASICs which might have similar performance, or the potential to run a soft core on off-the-shelf FPGAs, at much higher cost per unit, but with the ability to rapidly iterate on the design?
pclmulqdq 1378 days ago [-]
The soft core approach has many advantages, but FPGA companies have dropped the ball on single-unit (hobbyist) sales.

Chips that cost $1000 from a distributor cost 1/10th to 1/100th the price when you have a relationship with the manufacturer, mostly because distributors can't sell them very quickly and have to keep a ton of stock to have the SKUs you want.

On a modern FPGA, processor clocks of 200-300 MHz are possible to get with designs that aren't huge.

LargoLasskhyfv 1378 days ago [-]
http://www.myirtech.com/list.asp?id=630 looks nice for something under $300 AND 4GB RAM, but I think the embedded quad-core ARM is still faster than anything you can 'emulate' on the FPGA.
DCKing 1378 days ago [-]
The Apollo project here would be particularly suitable as they have already iterated on the design using FPGAs. The chip is already working as an FPGA and bringing tangible improvements: I'm assuming a 130nm ASIC version would be even better.
rwmj 1378 days ago [-]
I'm not sure that assumption is necessarily right. As a general guide, on a cheap (sub $200) modern FPGA I can clock an RV64 core at 50-100 MHz. As you spend more on the FPGA, you can get higher clock rates and/or more cores. Also it should be possible to clock 32 bit cores higher (perhaps much higher) because there will be fewer data paths for internal routing to skew. On the other hand, modern RISC architectures are designed for this, whereas old 68k architectures may not be.
cmrdporcupine 1378 days ago [-]
I had no problem running PicoRV32 at 50mhz (maybe higher... 75mhz? can't recall, at that point I had other issues that might not have been CPU related) on an Artix 7 35t.

Honestly instead of chasing after new 68k silicon it'd be better to just emulate on a modern processor. Not the same romance, I know....but

pjc50 1378 days ago [-]
What happened to freescale/NXP "Coldfire"?
herio 1378 days ago [-]
ColdFire is still around but is also not fully binary compatible with 68k. There have been attempts at making Amiga accelerator cards using Coldfires, but I don't think I've ever seen one that was fully finished.
cmrdporcupine 1377 days ago [-]
It's not fully binary compatible but pretty damn close. People working on the Firebee were able to make it run pretty smooth, there's some things you can do to trap the old instructions and rewrite. It's never going to work for games and the like, but games and the like from that era have all sorts of other video hardware specific dependencies that are even more difficult to satisfy.

It really is "spiritually" a 68k series processor, just with some cleaning up. I like it.

DCKing 1378 days ago [-]
I am really not an expert in 68k but the Coldfire does not appear to be fully compatible with the old 68ks used in old Macs and Amigas, and Googling around it doesn't appear to have had much uptake if any. It's not being made anymore either.
CompAidedPoster 1378 days ago [-]
Coldfires are definitely still in production.
xeeeeeeeeeeenu 1377 days ago [-]
FireBee (http://firebee.org), which is a modern clone of Atari ST, uses it.
cmrdporcupine 1378 days ago [-]
No new Coldfire processors made in years, unfortunately. Freescale/NXP seems to be just leaving it.
pantalaimon 1378 days ago [-]
I guess everyone is just using ARM now.
phire 1378 days ago [-]
You would have problems with licensing the 68k ISA.

I believe freescale currently owns the architecture, and still manufactures some 68k microcontroller cores.

nullc 1378 days ago [-]
> I believe freescale currently owns the architecture,

Owns it how? 68060-- the last of 68k's designs-- was released in 1994. Any patents should now be expired.

webmaven 1377 days ago [-]
> > I believe freescale currently owns the architecture,

> Owns it how? 68060-- the last of 68k's designs-- was released in 1994. Any patents should now be expired.

Sure, patents wouldn't be a barrier to clone the design and create an equivalent using the same patented ideas, but copyright still prevents you from copying the design, and will prevent copying significant parts of the design as well.

colejohnson66 1377 days ago [-]
In other words: you can rearchitect it from scratch, but you probably can’t extract the die.
webmaven 1375 days ago [-]
"Probably" doesn't seem strong enough. I mean, the only thing that would give you any hope of avoiding being sued out of existence leaving nothing but a small greasy spot behind would be obscurity and commercial irrelevance.
DCKing 1378 days ago [-]
Interesting thing to consider. I wonder how actively people want to protect 68k, as not even Freescale/NXP seems to use it anymore.

Shouldn't that already be problematic for the 68k projects in hardware through FPGAs? Apollo already does it and sells hardware, and the MiSTER project also does it by releasing FPGA designs for e.g. the Sega Genesis which has a 68k processor. Is it a different story if you embed 68k in an ASIC?

afwaller 1378 days ago [-]
Texas Instruments still sells graphing calculators with 68k processors (TI-89 series, most commonly)
monocasa 1378 days ago [-]
All the patents of all the non-Coldfire cores have expired, which is the mechanism for enforcing ownership over ISAs.
CompAidedPoster 1378 days ago [-]
68060 - 600 nm
d_tr 1378 days ago [-]
I believe that a lot of people here might be interested in "Minimal Fab", developed by a consortium of Japanese entities.

These are kiosk-sized machines that a company can use to set up a fab with a few million dollars. Any individual can then design a chip and have it fabricated very (as in "I want to make a chip for fun") affordably.

I was not able to find a ton of information on this, but the 190nm process was supposedly ready last year and there were plans to go below this. The wafers are 12mm in diameter (so basically, one wafer -> one chip) and the clean room is just a small chamber inside the photolithography machine. There are also no masks involved, just direct "drawing".

patwillson22 1378 days ago [-]
My advice to anyone who's looking for a pathway into open source silicon is to look into E-beam lithography. Effectively, E-beam lithography involves using a scanning electron microscope to expose a resist on silicon. This process is normally considered too slow for industrial production, but its simplicity and size make it ideal for prototyping and photomask production.

The simplistic explanation for why this works is that electron beams can be easily focused using magnetic lenses into a beam that reaches the nanometer level.

These beams can then be deflected and controlled electronically which is what makes it possible to effectively make a cpu from a cad file.

Furthermore, it's very easy to see how the complexity of photolithography goes up exponentially as we scale down.

Therefore I believe it makes sense to abandon the concept of photolithography entirely if we want open source silicon. I believe that this approach offers something similar to the sort of economics that enable 3D printers to become localized centers of automated manufacturing.

I should also mention that commercial E-beam machines are pretty expensive (something like $1 million), but I don't think it would be that difficult to engineer one for a mere fraction of that price.
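
On the "too slow for production" point, a crude serial-write estimate makes the prototyping/production split concrete (every figure below is a placeholder, not a spec of any real machine):

    # e-beam exposes one "pixel" at a time, so write time scales with area
    pixel_nm = 50                # assumed beam grid
    rate_hz = 50e6               # assumed exposure rate

    def hours_per_layer(area_mm2):
        pixels = area_mm2 * 1e12 / pixel_nm**2   # 1 mm^2 = 1e12 nm^2
        return pixels / rate_hz / 3600

    print(hours_per_layer(10))      # a small prototype die: ~0.02 h
    print(hours_per_layer(70_000))  # a full 300 mm wafer: ~150 h

Fine for one-off dies and mask making, hopeless for volume.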

namibj 1378 days ago [-]
I suggest you take a look at how easy maskless photolithography is: https://sam/zeloof.xyz

Theoretically it should be feasible to fab 350 nm without double-patterning by optimizing a simple immersion DLP/DMD i-line stepper.

I think ArF immersion with double-patterning should be able to do maskless 90 nm.

kanwisher 1377 days ago [-]
fixing the url http://sam.zeloof.xyz/
0xcde4c3db 1377 days ago [-]
> I should also mention that commercial E-beam machines are pretty expensive (something like 1-Mil) but that I dont think it would be that difficult to engineer one for a mere fraction of that price.

I'm not sure where it was, but I remember seeing a project where someone made a rudimentary homebrew electron microscope by chemically etching the tungsten filament from a light bulb (to get the tip sharp enough) and attaching it to a piezo buzzer that was scored to separate it into four quadrants. The filament could be moved by applying various combinations of voltage to the piezo quadrants.

I didn't find the one I was thinking of (which I think was ca. 2002 and so maybe just vanished by now), but search results suggest that variations of this have been done by several people.

ynfnehf 1377 days ago [-]
That sounds like a pretty standard STM (scanning tunneling microscope). They had a couple of those at the university I went to. We had to cut the tips ourselves with pliers, which was a quite annoying process as you couldn't see if they were sharp enough by eye (the tips were supposed to be only a few atoms thick). They seemed pretty cheap to construct, but they are not the same thing as a scanning electron microscope.
wallacoloo 1377 days ago [-]
As far as democratizing hardware goes, I wonder if silicon is the wrong place to start.

Decades ago computers used magnetic core memory. Those things operate on a macroscopic/classical physics level. You can make a core memory by hand if you buy the ferrite toroids first. Moreover, you mention 3d printers — it's probably possible to manufacture the toroid on a sub-10k machine these days, be it a 3d printer or CNC machine. Some of these techniques generalize to multiple materials, meaning you could automate both the manufacturing of the toroid and the wires connecting them (and the assembly) and have an actual open-source, easily fabricated memory.

One thing not a lot of people know is that you can create clock-triggered combinatorial logic out of core memories just by routing the wires differently. So you’ve got your whole computation + volatile memory + non-volatile memory built on the same process using just two materials and at a macroscopic scale (think: millimeters). That sounds easier to bring up than silicon.
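
A crude way to see why logic falls out of the same element: a core behaves roughly like a magnetic threshold device, and coincident-current selection (the same trick core memory uses for addressing) gives you gates. All numbers below are arbitrary:

    # toy model: a core flips only when the summed drive current
    # exceeds its switching threshold
    def core_flips(drive_currents, threshold=1.5):
        return sum(drive_currents) >= threshold

    # two drive lines threaded through one core act as an AND gate:
    # one unit of current per asserted input, neither alone flips it
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, core_flips([a, b]))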

Yeah, macro-scale has its limitations (speed; power draw!), but it’s still enough to enable plenty of applications, and with room to scale it as the tech gets better.

why_only_15 1378 days ago [-]
How much has power efficiency improved between 130nm and 7nm? Is it plausible to get better performance/watt for a custom chip on 130nm vs a software application running on a 7nm chip? I get that hardware has other benefits, but I'm just wondering where the cost/benefit starts to make sense for accelerators.
pjc50 1378 days ago [-]
> Is it plausible to get better performance/watt for a custom chip on 130nm vs a software application running on a 7m chip?

This very, very much depends on what the algorithm is (integer or FP? how data dependent?), but I would say no for almost all interesting cases.

The only exception would be if you're doing a "mixed signal" chip where some of the processing is inherently analogue and you can save power compared to having to do it with a group of separate chips.

Another exception might be low leakage construction, because that gets worse as the process gets smaller. This is only valuable if your chip is off almost all of the time and you want to squeeze down exactly how many nanoamps "off" actually consumes.

awelkie 1378 days ago [-]
An open source WiFi chip would be super cool. I wonder how easy it would be to take the FPGA code from openwifi[0] and combine it with a radio on the same chip?

[0] https://github.com/open-sdr/openwifi

pjc50 1378 days ago [-]
The problem is that analogue IC design is a field that even digital IC design people regard as black magic. It's clearly possible for that to happen but the set of people who have the skills to do it is very narrow and most of them are probably prevented from doing it in their spare time by their employment agreements.

I wonder how many "test chips" Google will let a non-expert team do to get it right? And whether they provide any "bringup" support?

Taek 1378 days ago [-]
A big part of the "black magic" really comes down to insufficient tooling. And at least in hardware, insufficient tooling comes down to the fact that everything is closed source and trade secret, and teams pretty much refuse to share knowledge with each other.

An open source community would go a long way to fixing an issue like this, and these "black magic" projects are actually a fantastic place for the open source world to get started, because it's an area where there's a ton of room for improvement over the status quo.

monocasa 1378 days ago [-]
They're only allowing parts that stay within the bounds of the PDK (which only allows digital designs) for now.
madengr 1378 days ago [-]
How does the PDK limit it to digital? Unless they are limiting you to logic cells and not allowing scaled transistors.
yjftsjthsd-h 1378 days ago [-]
Even if you could technically make it work, I'd be very nervous around the legalities of that. Or is the Wi-Fi spectrum so unregulated that you can run without any certification at all?
manquer 1378 days ago [-]
Certification has to do with the power of the signal and the frequency. Licensing is not required in some frequency bands, like the 2.4 GHz band used by WiFi.
ac29 1377 days ago [-]
WiFi equipment (and pretty much every other radio) requires certification in order to be sold in every country I am aware of. WiFi doesn't require a license to operate, but that doesn't mean you can just use any hardware you like (though I think there may be exceptions for hardware you build yourself, at least in the US).
baybal2 1378 days ago [-]
> Another exception might be low leakage construction, because that gets worse as the process gets smaller. This is only valuable if your chip is off almost all of the time and you want to squeeze down exactly how many nanoamps "off" actually consumes.

No, you actually have more leakage at older nodes; what changes is the ratio of current spent on leakage vs. current spent doing something useful.

MayeulC 1378 days ago [-]
Doesn't leakage increase again below 22nm because of tunneling losses, though?

Of course, the lower gate capacitance allows for lower switching losses. But adiabatic computing could theoretically recover switching losses, allowing for higher efficiency at older nodes. That can be approached by using an oscillating power supply for instance, to recover charges. If someone was to design something like this for this run, it could be very interesting.

Now I'm wondering if this isn't some covert recruitment operation by Google: they will likely comb through applications, select the most promising ones, and the designers will get job offers :)

baybal2 1378 days ago [-]
> Doesn't leakage increase again below 22nm because of tunneling losses, though?

You have tunnelling losses on bigger nodes as well; they are just not as dominant. Dielectrics got better as nodes shrank, and this is the reason FinFETs became practical (which switch faster, and more reliably on smaller nodes, but leak worse).

6nf 1378 days ago [-]
You won't be able to profitably mine Bitcoin on 130nm ASICs (just as an example)

130nm is almost 20 years old at this point. You can do amazing things with this process but saving power is probably not one of them.

garmaine 1378 days ago [-]
But as an example, you WOULD be able to profitably mine bitcoin on 130nm ASICs if all the rest of the world had was CPUs/GPUs/FPGAs, which was more what the grandparent post was asking: 130nm hardware implementations can be much, much faster and/or energy efficient than a 7nm general-purpose chip which simulates the algorithm.
Taek 1378 days ago [-]
I wasn't able to find great specifications for the 130nm process, but it looks like the difference in transistor size and efficiency is somewhere around 100x. For specialized applications, going from a CPU to an ASIC is usually around a 1000x performance gain.

So yes, for specific tasks like crypto operations or custom networking, you should be able to make a 130nm ASIC that is going to outperform a 7nm Ryzen. You are not going to be able to make a CPU core that's going to outperform a Ryzen however.
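
Combining those two rough factors (both numbers are just the estimates above):

    # illustrative only
    asic_vs_cpu_gain = 1000    # specialization win at the same node
    node_gap_130_vs_7 = 100    # transistor size/efficiency gap
    print(asic_vs_cpu_gain / node_gap_130_vs_7)  # ~10x net win left over

So the specialization win can plausibly eat the node handicap with an order of magnitude to spare, for workloads narrow enough to harden.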

rasz 1378 days ago [-]
130nm was good enough for 2GHz 30W CPUs back in the day. We are talking performance close to decoding 1080p@30 H.264 in software.
microtherion 1378 days ago [-]
I suspect, however, that the gap between designs that are realizable for amateurs with limited training, and the ones that are realizable for professional teams is wider than in software.

So somebody like me, who did two standard cell based ASICs 25 years ago, probably would have to add a sizable safety margin to produce a reliable chip, and would achieve nowhere near the performance of a pro team at the time.

neltnerb 1378 days ago [-]
I would definitely be rather interested in learning how to design some chips with feature sizes large enough for power handling... I'd love to hear about this as well. This sounds like a clever way to commoditize hardware design, like when printing PCBs became affordable.
hristov 1378 days ago [-]
It depends on what application you have, but if you have a relatively narrow and complex application, I would say definitely yes.
xvilka 1378 days ago [-]
I should note there's an open source ASIC toolchain - OpenROAD[1][2]. I wonder if these can be integrated. You can also use SymbiFlow to run your prototype on an FPGA[3][4].

[1] https://theopenroadproject.org/

[2] https://github.com/The-OpenROAD-Project/OpenROAD

[3] https://symbiflow.github.io/

[4] https://github.com/SymbiFlow

nihil75 1378 days ago [-]
Spot on. Both are discussed by Tim in the video as part of the solution stack.
madushan1000 1378 days ago [-]
These are already (sort of) integrated. Skywater PDK's primary target is open source EDA flows; commercial flows are their secondary target.
StillBored 1378 days ago [-]
Because apparently no one remembers the other "free" fab service.

https://www.themosisservice.com/university-support

Previously MOSIS would select a few student/research designs to go along with the commercial MPW runs, frequently on pretty modern fabs. I'm not really sure how many they still run.

(oh here is the MOSIS/TSMC runs for this year https://www.mosis.com/db/pubf/fsched?ORG=TSMC)

dasudasu 1377 days ago [-]
But this one is open to hobbyists. If you're doing a graduate degree in ASIC design and your group doesn't have the funds to do simple fab runs, you're probably in a somewhat questionable program to begin with.
kingosticks 1378 days ago [-]
> All open source chip designs qualify, no further strings attached!

Surely they have some threshold requirements that the thing actually works? How is this going to work? I mean, if there's no investment required from me, what's the incentive for me to verify my design properly? What's the point in them fabbing a load of fatally bugged open-source designs?

jsnell 1378 days ago [-]
All open source designs qualify, doesn't mean they get selected :/ If you look at the slides, they say that each run will be 40 designs, and they'll do one run this year and multiple next. Criteria for how they'll choose if there are more than 40 applicants TBD.
dwild 1378 days ago [-]
I'm pretty sure they'll expect a tiny bit of QA over an FPGA before fabbing them.

I don't think they care much about what comes out of it though, whether it's "bugged open-source designs" or not; for sure they want fewer of the bugged ones, but the end-goal isn't the projects, it's the people behind these projects. Google wants more people that design chips, and then to recruit them. This just goes under recruitment costs. They may be interested in the open source part of it, but as soon as they stop paying for it (and I'm pretty sure it's not going to stay for long, they mention up to 2021), the state of open source chip design will come back to the current status.

moring 1378 days ago [-]
With the PDK being open, does anyone know if any kind of NDAs are still required to get a chip fabbed? While free-of-charge fabbing is quite nice, I think being NDA-free is even more important so all work including the tweaks necessary for fabbing can be published, e.g. on GitHub.

BTW, it will be nice to try this together with the OpenROAD tools [1]. They have support for Google's PDK on their to-do list (planned for q3, but I doubt it will be ready that fast).

[1] https://github.com/The-OpenROAD-Project https://theopenroadproject.org/

cottonseed 1378 days ago [-]
I don't think so. Tim explains in the talk that designs must be submitted via a public GitHub repository. I think the whole point is to create an open ecosystem.
lowwave 1378 days ago [-]
yup, NDAs destroy economic productivity.
lowwave 1378 days ago [-]
From talking to VCs: NDAs are useless. It really comes down to whether you trust the people or not. Like patents, they are just a gesture. Everything is in the implementation.
madushan1000 1378 days ago [-]
I think the plan is to let people do exactly what you're talking about (publish everything down to layout files). At least that's the impression I got from the talk. There is a distro of OpenROAD called OpenLane trying to target this PDK. FOSSi have a couple more talks coming up in the next few months on tooling support, including OpenROAD, OpenLane, etc. And I think they're aiming for the first shuttle run in November, so the tooling will have to be ready at least by Q3.
tdonovic 1378 days ago [-]
That sounds pretty huge. I've never seen people on Hackaday or similar getting small runs of chips fabbed. What are the broader implications of this? Will other fabs start to lower the barrier to production as well?
ohazi 1378 days ago [-]
You generally don't do small runs of chips, unless cost is no object. The NRE costs of getting the masks made, even on older processes like these are still comfortably in the $X00,000 range, blowing past $1 million pretty quickly if you need a process that isn't ancient. That's without design software licenses, which can be hundreds of thousands more.

So the minimum order quantity usually needs to be at least in the tens to hundreds of thousands of chips if you don't want each chip to be a sizeable chunk of that initial cost.
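
The amortization math is simple but brutal (the NRE figure is picked from the "$X00,000" range above, purely for illustration):

    nre = 500_000                       # masks + setup on an older node
    for qty in (1_000, 10_000, 100_000):
        print(qty, nre / qty)           # $500 / $50 / $5 of NRE per chip

At a thousand units the masks dominate the chip cost; at a hundred thousand they fade into the noise.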

It would be really nice to get to the point where small batch chips were viable though. One aspect is cost -- if they could get the NRE cost down to, say, $20k - $50k, and the software licensing cost down to zero, that would open up a lot of options.

The other aspect is the "dark art" nature of the process kit and communicating with the fab. If everybody assumes that chip design is expensive, they're going to be reluctant to even talk to the fab to see what options are available. If they see a bunch of people building interesting things with this shuttle program, then all of a sudden the fab is going to see more business interest as people try to figure out if there's a way to make their project work.

MayeulC 1378 days ago [-]
There are cheap-ish multi-project wafers (MPW).

These organizations typically also give access to software design tools. But that's still a sizeable investment. The last project I worked on used a (more expensive than usual, I think) GloFo 22nm technology. Price was around €9k/mm², and 9mm² was the minimum area. Still much more accessible to academia than to individuals or open source projects, but not out of the realm of a crowdfunding campaign.
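
For scale, the minimum buy-in at that rate (figures straight from the quote above):

    eur_per_mm2, min_area_mm2 = 9_000, 9
    print(eur_per_mm2 * min_area_mm2)   # 81,000 EUR minimum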

There are multiple chips that ought to be open source, broadly available, and cheap: AV1 decoders, small FPGAs, Wi-Fi or SDR chips, TPMs, and other crucial pieces for security, DIY/open HW projects, and basic computer building blocks. Most interesting to me are chips that would allow novel applications that commercial ventures would never look at, like open, hackable p2p WiFi meshes, or emulators-on-a-chip, or other application-specific coprocessors (protein folding, etc).

[1] https://mycmp.fr/technologies/process-catalog/

[2] https://europractice-ic.com/

namibj 1375 days ago [-]
A while back I came up with the idea of an ultra-miniature quadcopter with asynchronous outrunner motors whose stators would most likely be sintered (with or without a ferromagnetic matrix) to handle the power density, and a simple tube-shaped rotor (though a squirrel cage style might be better).

I'm thinking 5-20 mm rotor diameter (3M-750k rpm transonic limit), or maybe even smaller.

The interesting part would be an analogue ASIC that decodes an external control signal modulated onto the microwave (via rectenna) or optical (solar cell/photodiode) "wireless power" beam.

Demodulation would first do naive rectenna-based AM demodulation, followed by a bandpass and FM demodulation, revealing 12 carriers corresponding to the 4 3-phase motors, which are just FM-demodulated to yield the H-bridge control signals.

These would primarily be one xx MHz PLL and 12 lower-frequency ones spaced 50-200 kHz (the FM subcarrier's bandwidth (assuming narrow-band FM) is twice the maximum motor field frequency), starting as low as feasible while still being able to use AC-coupling liberally.
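
A sanity check on that carrier plan (numbers picked from the ranges in this comment, purely hypothetical):

    # 12 narrow-band FM subcarriers, one per motor phase
    f0, spacing = 100e3, 100e3          # lowest subcarrier and spacing (Hz)
    carriers = [f0 + i * spacing for i in range(12)]
    max_field_hz = 50e3                 # assumed max motor field frequency
    assert spacing >= 2 * max_field_hz  # narrow-band FM bandwidth budget
    print([f / 1e3 for f in carriers])  # 100 ... 1200 kHz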

Also either some amplifiers for (potentially-overdriven) "linear" H-bridge operation or (NE555-like?) PWM chopper drivers to exploit the winding inductance for less-wasteful H-bridge operation.

Far too much to realize in discrete circuitry, but nothing really fancy beyond a parametric PLL design. And not really realistic for a μC, either, because of brown-out resilience and overall latency.

At least the polyphase induction motors are very easy to drive, compared to the typical 3-phase permanent magnet outrunner motors used in most multicopters.

Depending on how predictable the effects of some tuning parameters are, maskless litho could allow for chips to be tuned to measured electro-mechanical properties of these sintered motors, reaching optimal drive waveforms. And for digital circuits, hard-wired ROM (security/shelf life/radiation-hardness) for individual chips or even doping-controlled ROM for anti-readout private/secret key storage.

I expect a maskless double-patterning ArF+immersion process allowing NDA-free-usage to be "the" thing that would enable true state-of-the-art experimentation and true ASICs (where the prototype needs an ASIC to be more than a paperweight after some photoshoots and staged interactions).

Feel free to contact me/let me know if you'd like further discussion(s).

ohazi 1378 days ago [-]
Yes, this is what Google is doing with this project.
cottonseed 1378 days ago [-]
efabless already runs the shuttle service that Google and efabless are going to fund here. From the efabless home page:

> $70K, 20 WEEKS, 100 SAMPLES

Not quite $50K, but close.

grandmczeb 1377 days ago [-]
Skywater (the actual fab Google is using for this program) also runs their own shuttle program[1]. The cost isn't public, but it's reportedly in the range of 40-50k.

[1] https://www.skywatertechnology.com/mpw-fastshuttle/

pkaye 1378 days ago [-]
I think it combines a multi-project wafer service with some open source tools. I think they are trying to foster an open-source development atmosphere around chip design. The current commercial tools are expensive and difficult to use. Maybe this can be improved upon to make it accessible to more individuals.

https://en.wikipedia.org/wiki/Multi-project_wafer_service

novaRom 1378 days ago [-]
Broader implications:

* More people will learn complete digital design workflow; very helpful for many students of EE/CE

* More bright ideas and experiments in robotics/IoT

* More startups

cromwellian 1378 days ago [-]
This reminds me of how cubesats kind of got off the ground because some launch companies allowed extra spare capacity to be sold or donated to student projects.
hinkley 1378 days ago [-]
I couldn't tell which was the cart and which the horse, but the SpaceX telecommunications satellite launch I saw had a rideshare arrangement going on. I suspect what happened is that someone only needed half a payload, and SpaceX filled the rest with their own stuff. But the PR person made it sound like the opposite was happening.

I'm not sure what happens when they reach full capacity on their sat network. Space for research projects, or launching surplus consumables?

quyleanh 1378 days ago [-]
Well done, Google. But there is still a problem with EDA tool licenses... Is there any replacement for the Cadence Virtuoso tool for chip design?
stephen_g 1378 days ago [-]
The efabless people that the talk mentions a bunch of times are using a fully open-source design flow but I think it's a bit hacky (as in, a bunch of command line tools from various open source projects, some of which may be unmaintained judging by the commit logs). They seem to have successfully fabricated a RISC-V based SoC with it though, which is crazy cool.

As somebody with a decent amount of FPGA experience, I've been meaning to have a go at setting this software up and seeing if I can get anything through synthesis and place-and-route, but I haven't had the spare time.

It uses Yosys for synthesis and a few other tools for the rest of the process, and is called Qflow - http://opencircuitdesign.com/qflow/index.html

MayeulC 1378 days ago [-]
IIRC there are a couple of ways to produce the intended design. At the end of the day, fabs often take layouts in the GDSII format, which is documented and open. The KLayout open source visualizer is industry-standard in my experience.

Now, how do you generate these layouts? It depends on what you are doing. If you're more on the experimental side of things, writing scripts to generate structures is fine, as long as they conform to the fab-provided design rules. Technically, that's still what everyone is doing at the industrial level, except the scripts -- often written in Tcl -- are provided by the fab.
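
As a toy illustration of that script-driven approach, a minimal sketch using the gdspy Python library (the layer number and dimensions are arbitrary assumptions, not from any real PDK):

    import gdspy  # pip install gdspy

    lib = gdspy.GdsLibrary(unit=1e-6, precision=1e-9)  # work in microns
    top = lib.new_cell("TOP")

    # A bare rectangle on an arbitrary layer. A real design must conform
    # to the fab-provided design rules (minimum width, spacing, density).
    top.add(gdspy.Rectangle((0, 0), (10, 2), layer=66))

    lib.write_gds("demo.gds")  # inspect the result in KLayout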

Now if you have some FPGA experience, you are probably interested in logic synthesis tools. There are a few; I've seen some academic ones with their own place-and-route stage, for instance. https://open-src-soc.org/program.html#T-CHAPUT does that, I think.

The slides linked above outline one of the possible ways to do this: leverage Chisel (https://www.chisel-lang.org/) and the FIRRTL intermediate representation for RTL description. A few tools can ingest the output and try to come up with a layout. Hammer (https://github.com/ucb-bar/hammer) is such a tool, but I don't think that PDK is available with it just yet. To be honest, I don't think commercial tools are that advanced, and it would be fairly doable to catch up.

There is some interesting work in this field, but since fabbing is expensive, it tends to happen more within the academic community than the free software one. I'd look for papers rather than on GitHub, though that's slowly changing.

The chip design world is a slow beast to turn around: everything in the fabrication process is optimized to maximize yield, hence very little leeway is allowed. "If it ain't broke, don't fix it" is the motto, for good reason: if a 0.2% change in humidity levels can make a fab lose millions, they won't try new and experimental software.

I'm watching this space, notably with Verilog alternatives such as Migen. The open source community is starting to embrace FPGAs, which is already great. I wish more manufacturers opened up their bitstream formats, so maybe we need an open FPGA? This free fabbing offer would also be a great fit for Wi-Fi chips, I think. I wonder if the people at openwifi (https://github.com/open-sdr/openwifi) are interested?
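
For a flavour of what a Verilog alternative like Migen looks like, a minimal sketch (the module and its period value are invented for illustration):

    from migen import Module, Signal, If
    from migen.fhdl.verilog import convert

    class Blinker(Module):
        def __init__(self, period=12_000_000):
            self.led = Signal()
            counter = Signal(max=period)
            # Synchronous logic: toggle the LED every 'period' clock cycles
            self.sync += If(
                counter == period - 1,
                counter.eq(0),
                self.led.eq(~self.led),
            ).Else(
                counter.eq(counter + 1)
            )

    blinker = Blinker()
    print(convert(blinker, ios={blinker.led}))  # emits plain Verilog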

I hope that gives a few interesting pointers to whoever reads this :)

orbifold 1378 days ago [-]
Hammer is just a driver for tools that cost >100k to license. And that doesn't include access to memory compilers, which you would also need.
seldridge 1378 days ago [-]
There is an open PR adding support for the OpenROAD tools [^1]. So, there should be a flow that uses open source VLSI tools eventually.

The Google 130nm library is still filling a huge gap, as all the open PDKs up to this point were "fake" educational libraries, e.g., FreePDK [^2]. You can run them through a VLSI flow, but you can't tape them out.

[^1]: https://github.com/ucb-bar/hammer/pull/584

[^2]: https://www.eda.ncsu.edu/wiki/FreePDK

MayeulC 1378 days ago [-]
Thanks for the answer; I had forgotten about this. I looked at Hammer some time ago, but we decided to go for PoC-like, less complex designs.
quyleanh 1378 days ago [-]
It is a bit complicated, isn't it? I do hope that someday there will be a fully open-source solution to this problem. But it seems that day is still a long way off.
PopeDotNinja 1378 days ago [-]
Maybe we could get Cadence to open source 20 year old software for the 20 year old 130nm chips!
MayeulC 1378 days ago [-]
I wouldn't count on it: I don't think Cadence internals have changed much since then.

And if they were to, I'd say that Cadence itself isn't especially easy to use, nor complicated to replicate. It would feel more like a lock-in attempt.

The gEDA project would be a good place to start a new layout-level EDA. It already has the necessary tools for simulation. Synthesis and place-and-route tools exist, but there are many alternatives, documentation is lacking, and I am not sure that the PDK is compatible.

I don't know of a good open-source drawing tool, but it shouldn't be too complicated to make a basic one. The more complex part would be to integrate it with DRC (design rule check). And then the usual layout extraction to perform LVS (Layout Versus Schematic) simulation, antenna rules, etc.
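
As a toy illustration of what a single design rule boils down to (the rule value is invented, and real decks contain thousands of rules, as noted below):

    # Toy DRC: flag shapes narrower than a made-up minimum-width rule.
    MIN_WIDTH_UM = 0.15  # hypothetical metal-layer rule, not from a real PDK

    def min_width_violations(rects):
        """rects: iterable of (x0, y0, x1, y1) tuples in microns, one layer."""
        return [r for r in rects
                if min(r[2] - r[0], r[3] - r[1]) < MIN_WIDTH_UM]

    print(min_width_violations([(0, 0, 0.1, 5.0), (0, 6, 1.0, 7.0)]))
    # -> [(0, 0, 0.1, 5.0)]  (the first shape is only 0.1 um wide)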

Thinking about it, it's a good thing that node isn't too advanced. It reduces the design-rule complexity by a few orders of magnitude.

quyleanh 1378 days ago [-]
> The more complex part would be to integrate it with DRC (design rule check)

If you have an open-source design tool (including schematic simulation and verification), I think you will also have an open-source tool for physical verification, assuming we still use the rule-check standards from Mentor Calibre and Assura.

> Thinking about it, it's a good thing that node isn't too advanced. It reduces the design rules complexity by a few orders of magnitude.

The complexity depends on the process. The smaller the process, the more complex. There are thousands of rule checks even on the older processes (180nm, 130nm...).

MayeulC 1378 days ago [-]
> If you have open-source design tool (including schematic simulation and verification), I think you will have open-source tool for physical verification

Right, though the manufacturer will usually run design rule checks automatically on the submitted designs (they don't want you to endanger other people's components due to density or antenna rules). But I was mostly thinking of it being integrated with a manual layout drawing tool: that's a nice-to-have, but not necessary, and more complex for a drawing tool. If you leave that out, creating a drawing tool should be pretty straightforward.

> The smaller process, the more complex.

Hence my point: it's easier to start with a less-complex process.

exikyut 1378 days ago [-]
>> Maybe we could get Cadence to open source 20 year old software for the 20 year old 130nm chips!

"Haha, funny!"

> I wouldn't count on it: I don't think Cadence internals have changed much since then.

(Sigh)

dTal 1378 days ago [-]
Can anyone venture a guess as to why Google might be doing this? What's the incentive structure here?
Koffiepoeder 1378 days ago [-]
The market for skilled hardware designers is running low: training costs are high, and complexity has skyrocketed. By doing this, Google can increase attention to a field that is otherwise dominated by big corporates (already somewhat the case). The only way to have a sane and healthy chip market is to 1) lower the entry barrier and 2) stimulate innovation. This does both.
novaRom 1378 days ago [-]
Another point is that silicon-related tech is currently leaving the US and beginning to boom in China.
baybal2 1378 days ago [-]
They want to spur competition among silicon suppliers.

The industry has become dangerously consolidated, with the likes of Avago/Broadcom trying to buy themselves a monopoly.

The big semiconductor companies look at big dotcoms like Google as cows to milk; obviously, the dotcoms don't like it.

cottonseed 1378 days ago [-]
Google wants to create an open, innovative ecosystem for silicon so it will be easier for them to build accelerators for their workloads to meet the growing demand for compute. TPU is only one example of the kind of accelerators they want to build. Tim directly addresses this in the talk: https://youtu.be/EczW2IWdnOM?t=407.
amiga-workbench 1378 days ago [-]
I know Google absolutely loathes having Intel silicon in their datacenters; the management engine and other blobs can't be audited. It's conceivable they want to help bring open chips to market to try and remedy this problem.
MayeulC 1378 days ago [-]
I wrote something similar above, but maybe a bit like Microsoft acquiring LinkedIn? To get a list of chip designers that could possibly work at Google? Since the designs are open source, they can also evaluate their skill level. And lastly, the contributors are probably less likely to already work at IC companies that have NDAs, etc.
chaz6 1378 days ago [-]
Possibly a new source of intellectual property? I would be interested to read the terms and conditions.
truth_seeker 1378 days ago [-]
Any Chisel developers here?

How do the iterative development speed and library ecosystem compare to traditional native RTL design tools?

seldridge 1378 days ago [-]
I'm one of the Chisel devs.

My biased view is that iterative development with Chisel, to the point of functional verification, is going to be faster than in a traditional RTL language, primarily because you have a robust unit testing framework for Scala (ScalaTest) and a library for testing Chisel hardware, ChiselTest [^1]. Basically, adopting test-driven development is zero-cost---most Chisel users are writing tests as they're designing hardware.

Note that there are existing options that help bridge this gap for Verilog/VHDL like VUnit [^2] and cocotb [^3].
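
For a taste of what that gap-bridging looks like with cocotb, a minimal sketch (the DUT and its signal names clk, rst, and count are assumptions for illustration; requires a recent cocotb):

    import cocotb
    from cocotb.clock import Clock
    from cocotb.triggers import RisingEdge

    @cocotb.test()
    async def count_increments(dut):
        """Check that an up-counter advances by one each clock."""
        cocotb.start_soon(Clock(dut.clk, 10, units="ns").start())
        dut.rst.value = 1
        await RisingEdge(dut.clk)
        dut.rst.value = 0
        await RisingEdge(dut.clk)
        before = int(dut.count.value)
        await RisingEdge(dut.clk)
        assert int(dut.count.value) == before + 1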

For libraries, there are multiple levels. The Chisel standard library provides basic hardware modules, e.g., queues, counters, arbiters, delay pipes, and pseudo-random number generators, as well as common interfaces, e.g., valid and ready/valid. Then there's an IP contributions repo (motivated by something like the old TensorFlow contrib package) where people can add larger third-party IP [^4]. Then there's the level of standalone large IP built using Chisel that you can use, like the Rocket Chip RISC-V SoC generator [^5], an OpenPOWER microprocessor [^6], or a systolic array machine learning accelerator [^7].

There are comparable efforts for building standard libraries in SystemVerilog, notably BaseJump STL [^8], though SystemVerilog's limited parameterization and lack of parametric polymorphism limit what's possible. You can also find lots of larger IP ready to use in traditional languages, e.g., a RISC-V core [^9]. Simply because the user base of traditional languages is larger, you'll likely find more IP in those languages.

[^1]: https://github.com/ucb-bar/chisel-testers2

[^2]: https://vunit.github.io/

[^3]: https://docs.cocotb.org/en/latest/

[^4]: https://github.com/freechipsproject/ip-contributions

[^5]: https://github.com/chipsalliance/rocket-chip

[^6]: https://github.com/antonblanchard/chiselwatt

[^7]: https://github.com/ucb-bar/gemmini

[^8]: https://github.com/bespoke-silicon-group/basejump_stl

[^9]: https://github.com/openhwgroup/cva6

truth_seeker 1378 days ago [-]
Gracias.
novaRom 1378 days ago [-]
Can someone please tell me how photomasks are produced? I don't understand how tiny features can be printed at almost the same scale as the final structure. With a laser beam?

Say, as an input you have a layer description (schematics) - how can you transfer it to such a tiny scale so precisely as to produce a mask?

ric2b 1378 days ago [-]
They aren't built at the same scale; they're much larger than the final structure, and lenses are used to scale the image down to the desired size.

Here's a video from Intel on how they are made: https://youtu.be/u3ws0UebnSE

Apparently they use "electron beams" (not sure what those are; they sound similar to lasers, but with electrons), from this video: https://youtu.be/PWV9pvdRBNY

novaRom 1378 days ago [-]
Wow, this explains why the production of a mask takes 5 days (as mentioned in the first video):

https://en.wikipedia.org/wiki/Electron-beam_lithography

asgeir 1378 days ago [-]
Wouldn't that just be something like a CRT? https://en.wikipedia.org/wiki/Cathode-ray_tube
imtringued 1378 days ago [-]
It's probably closer to how an electron microscope works.
jcun4128 1377 days ago [-]
This was also a cool video I saw recently about lithography with plasma lasers: https://www.youtube.com/watch?v=f0gMdGrVteI (around 7:14 in particular).
rwmj 1378 days ago [-]
Photographic reduction. The masks are much larger than the final chips. https://commons.wikimedia.org/wiki/File:Semiconductor_photom... (Actually AIUI in EUV photolithography you can't use transparent masks, but must use a kind of mirror with the pattern etched onto it.)
novaRom 1378 days ago [-]
According to Wikipedia, masks are only 4 times larger - that is still very tiny.
ArchD 1378 days ago [-]
Lenses are used so the feature size on the mask is larger than the feature size on the wafer.

https://www.nikon.com/about/technology/product/semiconductor...

novaRom 1378 days ago [-]
Yes, but only a few times larger. It's still not clear to me how the initial mask is produced. From file to mask - a kind of printer, or a laser?
baybal2 1378 days ago [-]
For >1 micron it's optical lithographic transfer; for <1 micron, it's e-beam lithography.
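
A quick back-of-the-envelope check ties this to the 4x reticle mentioned above: even on the mask, 130nm features are still sub-micron, which is why the mask itself is e-beam written.

    # Feature size on the mask with a 4x reduction stepper
    wafer_feature_nm = 130
    reduction = 4
    mask_feature_nm = wafer_feature_nm * reduction  # 520 nm
    # 520 nm < ~1000 nm, i.e. below the optical-transfer threshold above,
    # so the mask pattern has to be written by e-beam.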
WatchDog 1378 days ago [-]
My understanding of ASIC production is that new circuit designs are capital intensive: they require masks to be produced and machines to be configured for the given pattern.

Are older processes more automated?

Can the 130nm production line produce many different designs without any manual intervention?

bradstewart 1378 days ago [-]
Mask sets will still be required for each chip (to my knowledge), but they are significantly cheaper on older processes like 130nm.

The process design kit (or PDK) mentioned in the article takes care of "configuring the machines". The PDK describes how to construct low-level primitives (the instruction set, if you will) for the specific fab. Designers then layer on their logic circuits using those primitives.

jitendrac 1378 days ago [-]
That is great. It will encourage a new hobbyist open-source ecosystem around the hardware community, just like many FOSS communities.

Even engineers/students from countries with fewer resources will now be able to design and make prototypes in a viable way.

phendrenad2 1378 days ago [-]
This is cool! But I fear the workflow to get a chip out the door requires a lot of niche specialized knowledge. Making a logic design work on an FPGA is much easier, because the chip overhead (stuff like I/O pins) is all handled for you. If I had to design my own I/O pins at the silicon level, I wouldn't know where to start. And having access to open-source tools that give me the ABILITY to build a chip doesn't help with the knowledge I'm missing.

I think, however, that this may help Google integrate into academia. I can imagine a lot of MSEE and PhD students are looking at this hungrily.

riking 1378 days ago [-]
The project comes with a standard harness around your 10mm^2 design, with provided I/O and a working RISC-V supervisor CPU.
canada_dry 1378 days ago [-]
> a logic design work on an FPGA is much easier

Whelp... definitely crossing fabbing off my list.

makapuf 1378 days ago [-]
Interesting. For reference, 130nm is what the Pentium III was made on: https://en.m.wikipedia.org/wiki/130_nm_process
cmrdporcupine 1378 days ago [-]
I have a friend who has a Verilog clone of the C64 VIC-II chip, which he has interfaced with a real C64, and it's running pretty much everything: demos, etc. It even supports weird things like the light pen.

I wonder if his project would fit the bill here... real VIC-II chips are dying all over the place and getting hard to find... manufactured ASICs to replace them could be a popular item....

ajb 1378 days ago [-]
So, this doesn't appear to have been announced by Google. It does seem to be real, but the OP may be jumping the gun a bit.

The authoritative source seems to be the slides of this guy at google: https://docs.google.com/presentation/d/e/2PACX-1vRtwZPc8ykkk...

From the slides this is "current plans, subject to change". This is an 'open source shuttle process'. Shuttle processes are a relatively cheap way of making small numbers of chips (it is actually more costly per chip, but the fixed cost is smaller). There will be some kind of approval process, and I would imagine that there is a capacity limit for both the number of chips and number of projects.

(I didn't have time to watch the talk, so the above is just from the slides)

rudedogg 1377 days ago [-]
- You get somewhere around ~100-400 chips, TBD later

- Each shuttle run will accept 40 projects; if more than that apply, a selection process will be used. A lottery was mentioned as a possibility.

- First shuttle run this November

Like you mention, everything is just planned at this point, but the speaker said these were roughly their goals.

ajb 1377 days ago [-]
Thanks. That sounds about what I expected.
cottonseed 1378 days ago [-]
Maybe watch the talk first before commenting? All these questions are answered in the talk.
ajb 1378 days ago [-]
Well since you evidently did have time to watch the talk, perhaps you also have time to enlighten us about these questions. If not, maybe don't criticise those of us who used at least some of our time to dig out some information for the thread.
mav3rick 1378 days ago [-]
You could have just watched the talk instead of reprimanding him.
jcun4128 1378 days ago [-]
How "bad" is that compared to standard/common 14nm, etc...
DCKing 1378 days ago [-]
130nm was used to make the Athlon XP, Athlon 64, Pentium M, Pentium 4 and PowerPC G5 in the 2001-2003 timeframe [0]. So at the peak of 130nm's performance spectrum, it was able to produce stuff that can still run 2020 software quite okay. The Athlon 64 is probably the best 130nm silicon produced in its heyday and it's in the ballpark of a Raspberry Pi 4 (which has a 28nm SoC) in single core benchmarks.

I don't think this program is meant or likely to produce high-frequency 100mm2+ chips (and it's worth remembering those chips had a lot of engineering effort put into them outside of the manufacturing process), but it should permit chips of somewhat decent performance. It's a very generous thing!

[0]: https://en.wikipedia.org/wiki/130_nm_process

walrus01 1378 days ago [-]
As I recall, that generation also included the first models of AMD Opterons, commonly built into dual-socket motherboards. For the time, they were very speed-competitive with the Intel option.
innocenat 1378 days ago [-]
I think at that time, the Opteron was THE server processor.
walrus01 1378 days ago [-]
I recall the dual-socket (everything was single-core at the time) Xeon being particularly unimpressive.

In fact, it was somewhat of a step backwards from the better thermals/power efficiency of a dual-socket, 1.13 to 1.4 GHz / 512KB cache Tualatin Pentium III.

jcun4128 1378 days ago [-]
I see. I remember using Dimension 4500s that had Pentium 4s.
lnsru 1378 days ago [-]
I don’t know, why do you refer 14 nm as common. It’s for the newest consumer toys. Regular electronics in your dish washer is made using 65, 90 or even 130 nm process.
why_only_15 1378 days ago [-]
The Samsung Galaxy A20, a ~$150 phone, uses the Exynos 7884, which is fabbed on a 14nm process.

pricing: https://www.androidauthority.com/cheap-android-phones-269520... soc: https://www.samsung.com/semiconductor/minisite/exynos/produc...

innocenat 1378 days ago [-]
That falls into "newest consumer toys". The bulk of IC chips are nowhere near a 14nm process.
jcun4128 1378 days ago [-]
yeah maybe 22nm is more common
kn0where 1378 days ago [-]
Pentium 4 was on a similar process size: https://en.wikipedia.org/wiki/Pentium_4
TheSpiceIsLife 1378 days ago [-]
This range of 32-bit Cortex chips is listed as 130-40nm:

https://en.wikipedia.org/wiki/STM32

goatsi 1378 days ago [-]
130nm chips first arrived in 2001, so it's about 20-year-old technology. This page has a few examples: https://en.wikichip.org/wiki/130_nm
gentleman11 1378 days ago [-]
Simple question: aren’t you basically not allowed to make chips because of patents? Not literally forbidden, but aren’t there so many patents that you can’t really work without violating one, even if you have never heard of it or the technique before? It just sounds so hazardous
awalton 1378 days ago [-]
Is enough of the PDK open now to allow for actual hacking on devices? I have a rather simple analog chip I'd love to make for my own personal uses (I'd love a really long modern bucket brigade device to build gritty analog delay lines for synth hacking)...
chvid 1378 days ago [-]
Sounds interesting, but what would you build as an open source chip?

I mean, 130nm is 20-year-old technology, and you can buy general-purpose CPUs today that are night-and-day faster than anything made with 130nm, allowing you to emulate anything specialized using software.

Symmetry 1378 days ago [-]
Gate-level emulation is really, really slow. If you've got a nice abstraction like the x86 ISA, you can simulate a chip at that level far faster, but if you're interested in the net-level design rather than the abstraction, emulation will be way, way slower, at least in throughput. And it takes a long time to fab a chip, so you really ought to do emulation first in any event.

And for gate/line-level effects, things get slower still. Back when I was doing my master's thesis, I was running weekend-long SPICE simulations of sequences of hundreds of instructions.
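
Some rough orders of magnitude illustrate the point (all rates below are invented for illustration; real numbers vary enormously with design size and simulator):

    # Rough throughput comparison: silicon vs. simulation (invented rates)
    silicon_hz = 100e6   # a modest 130nm clock rate
    rtl_sim_hz = 10e3    # event-driven RTL simulation, cycles per second
    spice_hz = 10        # transistor-level SPICE, cycles per second

    print(f"RTL sim slowdown:   {silicon_hz / rtl_sim_hz:.0e}x")  # ~1e4
    print(f"SPICE sim slowdown: {silicon_hz / spice_hz:.0e}x")    # ~1e7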

navanchauhan 1378 days ago [-]
Can I in theory build one optimised for running one program? Will it be of any benefit?
riffraff 1378 days ago [-]
Yes, but it depends on the program; that's what the whole ASIC industry for bitcoin mining is.
navanchauhan 1378 days ago [-]
I was thinking about AutoDock Vina (molecular docking software); I have literally 0 knowledge about hardware :(

Then again, this is going to be a really fun experience

exikyut 1378 days ago [-]
I initially did an image search to get a quick idea of what you were referring to. "Oh, that looks commercial/expensive..." - but no, it's open source, under the Apache license. Which means the only question is how motivated you really are to speed it up. If your answer is "really really __REALLY__ motivated", then...

- Reimplement the whole thing, in your choice of language, strictly without consideration of performance, to concretely grasp the implementation.

- Rewrite your reimplementation efficiently, using profiling etc, and using SSE/AVX or related techniques if/as possible. (I noticed references to Monte Carlo simulation in the code, and found some noise online that suggests this is vectorizable. I don't understand how MC is being used in the code though.) FWIW assembly language is likely 95-99% not worth chasing instead of Rust or C; one of the few real-world scenarios that call for asm is software video decode/encode, which boils down to patterns of hardcore number crunching that compilers are regarded to optimize poorly. I do not know whether this program is slow because it is poorly optimized or slow because it is simply computationally expensive.

- Rewrite your implementation to run on a GPU, if possible, using OpenMP or CUDA. (This may require implementing your own engine that achieves the same goals as the existing engine, after you achieve a high-level understanding of why the engine works the way it does, because you may need to rearchitect the way the program works in order to cram it into a GPU.)

- Reimplement your implementation in VHDL so it will run on an FPGA.

- Retarget your VHDL so it can be fabbed on a fixed-function ASIC.

This would be my high-level Handwavy Armchair Guide to achieving what you want :)

It's possible that the GPGPU or FPGA milestones will give you a significantly appreciable many-x performance boost. That may be 2x or 10x or 100x; you will be able to find out what is possible almost immediately, as you build your brain-dead implementation and go down little research/analysis paths figuring out how everything works.

It's also possible that the current implementation is poorly designed, and that sticking a profiler on it may find low hanging fruit. Likewise, it's equally possible the current implementation is well-tuned above average (despite being written in C++).

Oh, I found this random link that may be uninteresting or useful: https://news.ycombinator.com/item?id=18628326

dekhn 1378 days ago [-]
Yes, you can build dedicated ASICs. No, it's not worth it for docking software.
ascorbic 1378 days ago [-]
According to Wikipedia, there has been some success using FPGAs for AutoDock, so maybe it could be. https://en.m.wikipedia.org/wiki/AutoDock
dekhn 1378 days ago [-]
Yes, but since drug discovery isn't bottlenecked by virtual docking throughput, it doesn't matter.

We've seen similar approaches applied to BLAST, and in the end, everybody ends up giving up the ASIC or the FPGA because it's not cost effective long-term.

mhh__ 1378 days ago [-]
If we end up using technology like Clash, it might be "trivial" to go from software to HDL (i.e., exploiting Haskell's compartmentalisation).
Symmetry 1378 days ago [-]
Generally you get somewhere between 2 and 3 orders of magnitude power/performance benefit from realizing an algorithm in hardware, if it's suitable for that sort of thing. If you're dealing with random memory accesses into a large pool, it won't be, but streaming tasks like media codecs or encryption work really well.
blackrock 1378 days ago [-]
I wonder if you can make micro machines at this level? The MEMS thing.

I always wondered why you needed gearing mechanisms in a micro machine. Has there ever been a practical application for gears in MEMS?

stephen_g 1378 days ago [-]
Not with this PDK or process, no. MEMS processes are quite specialised, and I believe this project only supports digital standard cells currently, with IO and analogue/RF stuff coming eventually (it's on the roadmap in the slides).
qaute 1378 days ago [-]
> I wonder if you can make micro machines at this level? The MEMS thing.

At this size range, basic accelerometers, pressure sensors, and inkjet heads are absolutely doable, though state-of-the-art MEMS (mechanical vibrating frequency filters for RF receivers in phones, accelerometers) can have sub-100nm dimensions.

> Not with this PDK or process, no. MEMS processes are quite specialised.

But yeah, this is the problem. Although ICs and MEMS devices are made with similar tools, MEMS usually needs processing steps that don't play nicely with the steps in an IC process (e.g., etching away huge amounts of silicon to leave gaps and topography, or using processing temperatures and materials that mess up ICs). This SkyWater process cannot do MEMS.

A more general problem is that different MEMS devices often need different incompatible process steps, so a standardized process is infeasible (though http://memscap.com/products/mumps/polymumps tries).

However, there is a tiny chance that, if we get enough detail on the process steps and leeway in the design rules, a custom layout could implement a rudimentary accelerometer or something that works after post-processing (say, a dangerous HF bath), but only with intimate knowledge of said process steps (e.g., internal material stress levels) and a lot of luck.

qaute 1378 days ago [-]
> I always wondered why you needed gearing mechanisms in a micro machine. Has there ever been a practical application for gears in MEMS?

IIRC, Sandia Lab's SUMMiT V process (the source of videos like [1]) was funded in part to make mechanical latches and fail-safes for nuclear weapons, but I'm not sure what's currently in use for obvious reasons. I don't think they found many other practical applications, though experimentation led to TI's DMD chips, among other things.

Occasionally, MEMS techniques are used to make (relatively large) gears for watches.

I've also seen people try to use gears for microfluidic pumps, but I don't think any are much better than current simpler solid-state approaches.

[1] https://www.youtube.com/watch?v=GiG5czNvV4A

blackrock 1378 days ago [-]
Fascinating, thanks for the info. Some ideas about this:

(1) I wonder if you can make an unpickable lock with MEMS.

Say, if you get a finger print scan, or retinal scan, then the device would need a positive confirmation in order to unlock itself.

I have no idea how practical this is, but it sounds like some kind of Superman genetic authentication system, in order to unlock the information crystals.

(2) The other thing is, can the gears be used to store potential energy? Such as using the microfluidic pumps? Or a microspring?

Where maybe you can use another piezoelectric device, or solar, to provide the electricity to run the gears, in order to store potential energy during peak production hours.

Then, when you need it, you release the potential energy.

The key here might be if you can build a micro electric generator. But I don’t know if you can deposit a pair of opposing micro magnets on a MEMS unit.

But if this can work, then you would need a lot of units, in the tens of billions, in order to produce enough electricity to do something useful.

qaute 1378 days ago [-]
> I wonder if you can make an unpickable lock with MEMS

I'm not sure what you mean. MEMS are generally tiny and I'm not sure why you'd need a ~1mm safe? But MEMS relay switches, which mechanically connect/disconnect circuits, exist.

> The other thing is, can the gears be used to store potential energy?

Springs and fluid reservoirs aren't very energy or power dense; good batteries and capacitors are much more effective and reliable. MEMS flywheels have been built and are potentially competitive, but are also extremely tricky to build.

> The key here might be if you can build a micro electric generator.

This is doable and an area of active research (for, say, charging low-power devices when a human walks, definitely not grid-scale power). Magnets are hard to work with in MEMS, so other techniques (piezoelectricity, triboelectricity) are used. [1] is currently badly written, but it mentions the most important bits.

[1] https://en.wikipedia.org/wiki/Nanogenerator

ibobev 1378 days ago [-]
Could someone explain: is there any advantage to producing a 130nm custom SoC compared to using a lower-node FPGA for the same design?
rnestler 1378 days ago [-]
Current consumption. FPGAs are quite energy intensive compared to ASICs.

Also, for analog stuff you can't use FPGAs. And if you need an ASIC for that anyway, why not include the digital part as well?

Symmetry 1378 days ago [-]
The crossover point where ASICs become less expensive than FPGAs is also lower than you might think, even including mask costs, provided it's on an older process node.
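
A toy break-even calculation makes the point (all numbers are invented for illustration):

    # When does an ASIC beat an FPGA on total cost? (illustrative only)
    nre_asic = 150_000  # masks + shuttle + tooling on an older node, $
    unit_asic = 2.0     # per-chip cost, $
    unit_fpga = 15.0    # per-chip cost of a comparable FPGA, $

    breakeven_units = nre_asic / (unit_fpga - unit_asic)
    print(f"ASIC is cheaper beyond ~{breakeven_units:,.0f} units")  # ~11,538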
derefr 1378 days ago [-]
So, until now, there's been this niche for FPGAs, where people would buy them in decent numbers to use with static programming, in production devices, simply because they needed some custom DSP or some-such, but the capital costs of an ASIC fab-run would be a killer for their project.

Has this announcement thrown that use-case for FPGAs out the window?

gsmecher 1378 days ago [-]
The economics of FPGAs and ASICs over the past few decades are covered really well in [1]. It's almost always about production volume. The FPGA's ability to be reprogrammed is often a convenient side-effect.

In short, no, this doesn't impact the trade-offs and wouldn't even if Google provided this service as a commercial print-on-demand offering. You can get an ASIC fab'd on older nodes for surprisingly cheap, if you have the know-how and access to tools [2].

When power is a first-class design problem (it frequently is), even an "old" 28nm FPGA like Xilinx's 7 series will run rings around a 130nm ASIC. The extra silicon you're powering in the FPGA is more than offset by the economical access it gives you to modern nodes with lower voltages.

[1]: https://ieeexplore.ieee.org/document/7086413 [2]: https://spectrum.ieee.org/tech-talk/computing/hardware/lowbu...

chrisshroba 1378 days ago [-]
Could anyone offer an explanation of what this means, for all of us who have no experience with hardware at all?
lokl 1378 days ago [-]
Could this be suitable for a camera sensor? I don't know anything about hardware, but I am intrigued by the idea of exploring new camera sensor ideas.

Edit: Nevermind, another comment says 10 mm^2 per project. That's probably too small for the type of camera sensor I have in mind.

matheusmoreira 1378 days ago [-]
That's really awesome. If that gives us widely available open source hardware, our computing freedom will always be safeguarded. We'll always be able to run any software we want even if the hardware is not as good as proprietary designs.
dooglius 1378 days ago [-]
If you wanted to do something with a hardware root-of-trust, would the GDSII leak needed secrets (i.e. any private keys could be extracted by looking at what you're required to open up), or is that done in some special post-fab way?
yjftsjthsd-h 1378 days ago [-]
Could you burn in the private key using fuses?
riking 1378 days ago [-]
Take a look at the Google Titan chip slides for an idea of how to implement this: https://www.hotchips.org/hc30/1conf/1.14_Google_Titan_Google... Video: https://youtu.be/ve_64dbM4YI?t=3089

Specifically, slides 35-40. You burn a feature fuse to unlock manufacturing-test features. The device is personalized with a serial number + told to generate a private key + a record is stored in a database. Then, the key is locked in by burning a second feature fuse that disables any future writing to those segments.
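
In pseudocode, that provisioning sequence looks roughly like this (every function name here is hypothetical, sketched from the slide description rather than any actual Titan interface):

    # Hypothetical sketch of the fuse-based key provisioning flow
    def provision(chip, database):
        chip.burn_fuse("TEST_FEATURES")        # unlock manufacturing-test mode
        serial = database.next_serial()
        chip.write_serial(serial)              # personalize the device
        public_key = chip.generate_keypair()   # private key never leaves the die
        database.store(serial, public_key)     # record in the registry
        chip.burn_fuse("KEY_LOCK")             # disable writes to the key segments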

ur-whale 1378 days ago [-]
This is fantastic, for many reasons, but the two that come immediately to mind are:

    - amazingly good for security.
    - finally the public at large will get to understand *in details* how an ASIC is designed.
chrismorgan 1378 days ago [-]
Meta: please don’t use preformatted text for lists. It makes reading much harder, especially on narrower displays. Just put a blank line between each item, treat each as a paragraph.
ur-whale 1378 days ago [-]
Thank you for listing your personal preferences, but I also happen to have mine.
yummypaint 1378 days ago [-]
Anyone have a sense of how easy it is to audit/verify devices made with this process? I would love to see some properly trustworthy chips for end-to-end encryption come out of this.
neop1x 1377 days ago [-]
Hmm, sounds like Google is hunting for chip design talents. :) We will fab your design and if it looks good, come work for us. :P
kwccoin 1377 days ago [-]
The difference is that software is a pure public good, but hardware is not that pure. Hence you have other concerns and other means.
unnouinceput 1377 days ago [-]
Quote: " All open source chip designs qualify, no further strings attached!"

There is no such thing as free lunch! I really wonder what is Google's game plan with this. 20 years ago they started to made maps + email + office +... free for everybody, but the game plan was they gathered everything about everybody, so now we know. Sorry Google, I don't trust you one bit anymore.

threshold 1378 days ago [-]
Fantastic Google! Dream come true
mysterydip 1378 days ago [-]
Would this make it possible to reproduce some historic-but-rare chips like the 4004?
jecel 1378 days ago [-]
This is a 0.13µm CMOS process while the 4004 was made using a 10µm PMOS technology. So the electrical characteristics would not be the same. If you don't care about that then the answer is "yes".

One attempt to do something like this would put a Z80, a 6502, and a 68000 in a single chip (none of them are rare, however):

https://www.crowdsupply.com/chips4makers/retro-uc

jabl 1378 days ago [-]
Well, the 6502 was famously NMOS, which isn't CMOS either. Though Wikipedia tells me there is a '65C02', which is a CMOS version of the 6502.
resters 1377 days ago [-]
It would be nice to get all of the HDL used by the HPSDR project fabbed this way.
BurnGpuBurn 1378 days ago [-]
Are there any RISC-V designs that would plug into this?
cottonseed 1378 days ago [-]
Yes, there are lots of open-source RISC-V cores. Tim Edwards of efabless has another talk about creating a RISC-V based ASIC SoC: https://www.youtube.com/watch?v=EsEcLZc0RO8 based on PicoRV: https://github.com/cliffordwolf/picorv32. PicoRV is part of the efabless IP offerings. The chips will have a PicoRV harness on them.
BurnGpuBurn 1378 days ago [-]
Thanks!
fouc 1378 days ago [-]
What are the chances that Google will add their own hidden or proprietary circuitry to any open-source chips? They'll add all sorts of "Security" and "Tracking" features...
wolfd 1378 days ago [-]
Approximately zero. They have nothing to gain from doing so, and everything to lose. It isn't a website we're talking about; adding that kind of complexity to a chip would be highly obvious to the people who integrate it, who aren't from Google.
MaxBarraclough 1378 days ago [-]
To a chip? Doesn't seem likely. Adding something like an Intel Management Engine is quite a task, and they'd look awful if they got caught trying it in secret. If they're just making the CPU, in isolation from the rest of the system, I imagine it would be just about impossible to do something like that.

As to whether such changes could be detected, given that the intended design is known, I'm not sure. Someone more knowledgeable than me might be able to comment on that.

pjc50 1378 days ago [-]
If you hand them GDSII then fiddling with it is very time-consuming and difficult, but can be spotted by looking at the resulting chip under a microscope.

(Not entirely simple at 130nm as this is shorter than the wavelength of visible light!)

imtringued 1378 days ago [-]
You don't need to look at the smallest features of a transistor to notice that the chip has 30% more transistors than your original design.
Symmetry 1378 days ago [-]
Also, adding something like that would be an incredible amount of work. Basically, you would have to totally redo the layout even if you're just adding a macro somewhere, and totally redesign it if you're not going to cause a drastic decrease in max clock rate. That's for a management-engine or tracking style thing. A backdoor that makes #5F0A40A3 equal to every other number for password bypass wouldn't be that invasive and might only slow things down by a little bit, so I guess that's a possibility if a certain design becomes really popular?
jecel 1378 days ago [-]
The designs have to fit in 10mm² but the total chip will be 16mm² with the pads, a RISC-V and some interfaces supplied by Google. They could obviously fit some trick into their part, but given that it too will be open source you can inspect it if you don't trust them.
lizhang 1378 days ago [-]
Can anyone recommend some resources for jumping into ASIC design to take advantage of this offer?
temptemptemp111 1378 days ago [-]
Let's build a Ryzen 9 inspired RISC-V with more care for latency please! :)