Nvidia to Acquire Arm for $40B (nvidianews.nvidia.com)
DCKing 1325 days ago [-]
This is terrible. Not just because of Nvidia itself - which has a lot of problems of its own, as I commented previously on the rumors of this deal [1] - but because Nvidia's ownership completely changes ARM's incentives.

ARM created a business model for itself where they had to act as a "BDFL" for the ARM architecture and IP. They made an architecture, CPU designs, and GPU designs for others. They had no stake in the chip-making game themselves; they had others - Samsung, Apple, Nvidia, Qualcomm, Huawei, Mediatek, Rockchip and loads of others - make the chips. Their business model was to make the ARM ecosystem accessible to as many companies as possible, so they could sell as many licenses as possible. In that way, ARM's business model enabled a very diverse and thriving ARM market. I think this is the sole reason we see ARM eating the chip world today.

This business model would continue to work perfectly fine as a privately held company, or under a faceless investor that just wants you to make as much money as possible. But it's not fine if you are owned by a company that wants to use you to shore up its own position in the chip market. There is no way Nvidia (or any other chip company, though as laid out previously Nvidia might be even more concerning) will spend 40 billion on this without deliberately or inadvertently destroying ARM's open CPU and GPU ecosystem. Will Nvidia allow selling ARM licenses to competitors of Nvidia's business? Will Nvidia reserve ARM's best IP as a selling point for its own chips? Will Nvidia allow Mali to continue existing? Any innovation ARM made previously it sold to anyone, mostly indiscriminately (outside of legal restrictions), but now every time the question must be asked: "does Nvidia have a better proprietary purpose for this?". For any ARM chip maker, Nvidia is now both your ruthless competitor and the company that sells you the IP you need to build your chips.

EDIT: ARM's interests up to last week were to create and empower as many competitors for Nvidia as possible. They were good at that, and it was the root of the success of the ARM ecosystem. That incentive is completely gone now.

Unless Nvidia leaves ARM alone (and why would they spend $40B on that??), this has got to be the beginning of the end of ARM's golden age.

[1]: https://news.ycombinator.com/item?id=24010821

klelatti 1325 days ago [-]
Precisely, plus just consider the information that Nvidia will have on all its competitors who use Arm IP.

- It will know of their product plans (as they will need to buy licenses for new products).

- It will know their sales volumes by product (as they will need to pay fees for each Arm CPU sold).

- If they need technical help from Arm in designing a new SoC then the details of that engagement will be available to Nvidia.

How does this not give Nvidia a completely unfair advantage?

DCKing 1325 days ago [-]
I wouldn't use the term "unfair" here. There's also just three x86 licensees in the world and people don't usually consider that an affront. You buy, you control, that's how the world works.

But I do think it's important that we recognize that we're going from a position of tremendous competitiveness to a much less competitive situation. And that will be a situation where ARM is tightly controlled and much less conducive to the innovation we've seen in recent years.

a1369209993 1325 days ago [-]
> There's also just three x86 licensees in the world and people don't usually consider that an affront.

Hi! Counterexample here.

PixelOfDeath 1324 days ago [-]
ME TOO!

Especially because of the x86 oligopoly, I would think that Arm is that much more important as an ecosystem.

tboerstad 1325 days ago [-]
The three licensees would be Intel, AMD and VIA
dralley 1325 days ago [-]
VIA doesn't have a license for the AMD64 instruction set, however. Intel and AMD did a cross-licensing deal so they have a co-equal position.
StillBored 1325 days ago [-]
Which hasn't kept them from building 64-bit cores with everything including AVX-512.

https://fuse.wikichip.org/news/3099/centaur-unveils-its-new-...

IIRC, if you watch the "Rise of the Centaur" documentary, they talk about the Intel lawsuit and the corresponding countersuit that they won, which makes the whole thing sound like MAD.

More interesting there is https://en.wikichip.org/wiki/zhaoxin/kaixian

yearoflinux 1323 days ago [-]
Why doesn't someone like, say, TSMC acquire VIA in a deal where M&A experts do all their juggling and cunning to make the license carry over, I wonder?
thesz 1323 days ago [-]
Because TSMC can't compete with their customers, and one of them is AMD, I believe. TSMC promised not to build chips of their own, AFAIK, for exactly that non-competition reason.
deaddodo 1325 days ago [-]
VIA isn't party to the cross-licensing agreement Intel/AMD signed which gives them full access to each other's technology portfolio (since AMD was a second-source foundry for Intel, while Cyrix/VIA was not). However, the Court ruled (when they were still Cyrix) that they are allowed to sell any clean-room designed x86 hardware. So while they don't have access to internal architectural documents, they're allowed to reverse-engineer/duplicate x86.

In some ways, it actually frees them up a bit, as they don't have to reciprocate any efforts with Intel - which they tried to leverage with their PadLock technology. Unfortunately, their marketshare limits any real practical usage of those benefits.

aasasd 1324 days ago [-]
> they are allowed to sell any clean-room designed x86 hardware

Uuuuh, presumably the case was about patents, right? I don't see how cleanroom-ing is fine with regard to patents.

deaddodo 1324 days ago [-]
Source: https://law.justia.com/cases/federal/district-courts/FSupp/8...

Summary: Cyrix (and subsequently, VIA) have an implied license to Intel's patents; allowing them to develop x86 hardware.

How that differs from AMD: AMD and Intel have a full cross-license on technologies. This means AMD can utilize Intel resources to integrate AVX-512, a ring bus, etc. (and vice versa). VIA cannot. They must develop those technologies independently, in a compatible manner.

murgindrag 1324 days ago [-]
That case was about patents, but I'm not sure patents are as central here. Right now, there is a patent minefield of mutually assured destruction between everyone. If anyone exercised their patent rights, no one could make anything modern, and the whole industry would come screeching to a halt.

Oddly enough, integrated circuits have their own IP scheme. IP law covers copyright, patents, trade secrets, trademarks, and mask works. You rarely learn about mask works, since they're so narrow.

https://en.wikipedia.org/wiki/Integrated_circuit_layout_desi...

I think a lot of this would hinge on that corner of the law.

Dylan16807 1325 days ago [-]
Thankfully those patents are about to expire.
granzymes 1325 days ago [-]
The original patents have expired.

The problem with waiting for the patents to expire is you're always stuck 20 years behind.

Dylan16807 1325 days ago [-]
The ones necessary for x86-64, I mean.

Being 20 years behind isn't a particularly big deal if those 20 years are basically just SSE3, SSE4, AVX.

klelatti 1325 days ago [-]
That's a good point. I guess it is "unfair" but that isn't necessarily an argument against it - as you say lots of things are unfair.

But, given Arm's high market share in several markets, allowing one firm to use that market share to gain competitive advantage in related markets seems deeply problematic to me.

bigyikes 1325 days ago [-]
Noob question: how does the x86 licensing work? Does Intel still own the rights? Why would they license to AMD? Why don’t they license to others?
klelatti 1325 days ago [-]
Very briefly:

- They don't license because they can make a lot more money manufacturing the chips themselves.

- AMD also has the right to x86 because Intel originally allowed them to build x86-compatible chips (some customers insisted on a 'second source' for cpus), and following legal action and settlements between the two companies over the years, there is now a comprehensive cross-licensing agreement in place. [1]

- Note that AMD actually designed the 64-bit version of x86 that is used in most laptops, desktops and servers these days.

[1] https://www.kitguru.net/components/cpu/anton-shilov/amd-clar...

deaddodo 1325 days ago [-]
> some customers insisted on a 'second source' for cpus

IBM required it. It was their business MO to protect themselves from losing access to a technology or having the market be dictated by one company. Intel acquiesced so that their architecture would be the one used in the IBM PC series.

tibbydudeza 1324 days ago [-]
AFAIK they could implement the ISA but not the Intel interconnect bus, so AMD licensed HyperTransport (as used in the Alpha) from DEC, which is why you had different motherboards/chipsets.
deaddodo 1324 days ago [-]
All second source AMD (and other manufacturer) chips were pin-compatible with Intel chips. Compare the i386/am386, the i486/am486, etc. Even a few non-second source (e.g. the unique uArchs that came later) designs were still pin-compatible, such as the 5x86 and K6/K6-II/K6-III. In fact, the third parties were the first to break away from electrical compatibility with their Super Socket 7 motherboards.

Intel did change sockets as a means to disallow socket compatibility, forcing consumers into their architecture if they bought an Intel motherboard, but that had no effect on AMD's development. AMD had purchased a significant share of DEC's engineering portfolio and, along with it, their employees. Those employees then developed the K7 (Athlon) architecture around some of the Alpha's technological advantages, which included HyperTransport, a multi-issue FPU (fixing one of the major issues AMD had struggled with and bringing them ahead of Intel), etc.

pjmlp 1324 days ago [-]
Those cross-licenses are the only reason this laptop isn't running on Itanium; without them, AMD64 wouldn't exist.
rcgorton 1324 days ago [-]
The 'only' reasons (plural) that your laptop does not run Itanium:

- Battery life: an Itanium in a laptop would run for less than an hour.

- Too hot: Itanium servers ran incredibly hot due to their power draw. A laptop would probably catch fire in short order.

- Windows was not supported on Itanium.

- Cost: would you pay $5K for an Itanium laptop?
whereistimbo 1325 days ago [-]
some customers? I heard it was IBM.
VectorLock 1325 days ago [-]
In simplest terms, the AMD & Intel perpetual license goes all the way back to the early 80s and Intel has tried to legally harangue their way out of it ever since, with limited success.
deaddodo 1325 days ago [-]
Not really. Intel hasn't made any legal or business moves to try to cancel it since they developed EM64T and integrated it into their entire product line. AMD existing protects them from antitrust considerations. They would just prefer AMD stay in the budget realm so they can continue selling high-margin products (enthusiast, laptop and server chips) without competition.

Not that they would complain if AMD lost their license or ceased existing, they just don't seem to actively be trying to cancel the license at this point.

i386 1325 days ago [-]
> You buy, you control, that's how the world works.

Only in countries with poor regulators like the states does it work like this.

adventured 1325 days ago [-]
No, it works like that all across the world in fact.

It works like that in Japan: say hello to the many giant conglomerates that rule their economy top to bottom.

It works like that in South Korea: say hello to Samsung, roughly 12% of South Korea's GDP in a given year (Walmart by contrast is equal to 2.5% of US GDP, and that's crazy big).

It works like that in Germany: say hello to a parade of big old industrial giants that have dominated their economy for most of the past century and will continue to.

It works like that in China, openly so: they intentionally go out of their way to promote giant national champions at the expense of everyone else.

It works like that in France: their largest corporations and richest individuals are even larger in relation to their economy and national wealth than they are in the US (say hello to Arnault, Bettencourt, Pinault and the Wertheimers).

It works like that in Russia: say hello to the countless, directly state protected oligarchs. Threaten their interests, you die. Their approach is super simple.

It works like that in Italy and Spain, which are both dominated by old, large corporations and family interests. Which heavily explains their forever economic stagnation.

It even works like that in Switzerland: ever see how large their financial companies are in relation to the economy? Who do you think actually runs Switzerland? Their banks are comically outsized versus the size of the economy.

It completely works like that in Brazil and India.

It works like that across all of the Middle East, to a much greater degree than most anywhere else.

It works like that in second tier economies, including: Poland, Mexico, Argentina, Romania, Turkey, Thailand.

It works like that in poor countries, including: Vietnam, Indonesia, Philippines, Ukraine, South Africa, Pakistan, Bangladesh.

bodpoq 1325 days ago [-]
Hey now, don't bring Pakistan into this. We are owned by the military, not some puny corporation. Our biggest industrialists/companies barely hit a few billion dollars in yearly revenue, and they do it by keeping their heads down and staying quiet.
def_true_false 1324 days ago [-]
Sales of important companies have to be approved by regulators, e.g. "Reuters: EU regulators approve ArcelorMittal to buy Italian peer Ilva".

https://www.reuters.com/article/us-ilva-m-a-arcelormitta-eu/...

ngcc_hk 1325 days ago [-]
Yes. But are they successful? It is AMD that kicked Intel, not the other way round, both in architecture (x86 64-bit) and now, by working with others (Taiwan, which is not in your list), in beating their hardware.

Might and reality do not make right.

We need competition. We adore competition. And even in the countries you quote, many do have competition. If you do not innovate, you can of course try to stop and rest, but the world does not.

Be water, not mountain, my friend.

pjmlp 1324 days ago [-]
Without AMD I could be enjoying Itanium nowadays.
floatboth 1324 days ago [-]
Itanic was sinking from the start; it's not unlikely that Intel would've invented something like amd64 themselves just a few years later if AMD hadn't done the smart thing first.
pjmlp 1324 days ago [-]
You can pick any colour as long as it is black.
MR4D 1325 days ago [-]
American company buys a British company from a Japanese company.

How exactly is this only like in the states?

phkahler 1324 days ago [-]
I don't even think this is about CPUs. This purchase is consolidation of two GPU companies. I've said it before: as RISC-V further commoditizes CPUs, the differentiator will be who has graphics. In that light, this is pure consolidation.
klelatti 1324 days ago [-]
I have a lot of respect for RISC-V but I really struggle with an unqualified statement that RISC-V "is" commoditising CPUs in a meaningful way when there are basically no mobile, desktop or server chipsets with a meaningful presence in the market. All that may change over the next 10 years but who knows? For now x86 is dominant on servers and Arm in mobile.

In reality Arm commoditised CPUs in many markets by making reasonable designs available to all at reasonable cost, keeping control over the ISA and allowing firms to innovate in their implementations. You can have the same code running on a Raspberry Pi, an iPhone and a 64-core Graviton2 server.

The Nvidia takeover threatens all this by giving control to a firm who could well 'unlevel' the playing field and even refuse to offer the latest IP to competitors.

RISC-V may provide a way out for firms unhappy with Nvidia, but it could be a bumpy path. And it's certainly not the case that Nvidia is paying $40bn just to consolidate Mali with their own graphics IP.

floatboth 1324 days ago [-]
Arm's Mali GPUs don't seem very important in the grand scheme of things. They're the choice for SoC makers who don't have anything else. Qualcomm has their own Adreno, Apple has their own thing, NXP buys GPUs from Vivante, Samsung is now going to include AMD Radeons in phones...
captainbland 1324 days ago [-]
Sure, but you have to assume that at some point in the future Nvidia is going to start making those non-Nvidia GPUs very difficult/expensive to use in ARM designs. They could do this legally, technically (by manipulating the reference designs to tightly integrate Nvidia GPUs, or by cutting support for SoCs that integrate third-party GPUs), or any of the above.
mihaaly 1325 days ago [-]
> You buy, you control, that's how the world works

NO! That is how the world does NOT work! We've seen it a trillion times; I am disappointed, to say the least, to see this negligent 'argument' against blocking dominance! Shame!

welfare 1324 days ago [-]
How does the world work then?
Wowfunhappy 1325 days ago [-]
To play devil’s advocate a bit, are nVidia's incentives necessarily so different? Their goal will be to make as much money as possible, and it's clear that licensing has been a winning strategy for ARM.

Samsung comes to mind as another company that makes their own TVs, phones, SSDs, etc., but is also perfectly happy to sell the underlying screens and chips in those products to other companies. From my vantage point, the setup seems to be working well?

DCKing 1325 days ago [-]
It could be that Nvidia just wants ARM's profits and will otherwise leave them alone, but I don't understand why Nvidia would spend 40 billion dollars on that. They are spending 40 billion on control of a company; why would they do that if they would just leave it be? Surely they want to exercise that control in some way for their own goals. Especially a company like Nvidia which (in my linked comment) has a proven track record of not understanding how to collaborate with others.

EDIT: Let's be clear that ARM's incentive last week was to create and empower as many competitors for Nvidia as possible. They were good at that, and it was the root of the success of the ARM ecosystem. That incentive is completely gone now.

I'm guessing Samsung has a track record that would make me feel a little more confident about the situation if they'd taken over ARM here, but in general ARM's sale to SoftBank, and thereby its exposure to these competitive interests, has been terrible. They could have remained an independent company.

Wowfunhappy 1325 days ago [-]
> They are spending 40 billion on control of a company; why would they do that if they would just leave it be?

Well, the optimistic reason would be talent-share. nVidia has a lot of chip designers, and ARM has a lot of chip designers, and having all of them under one organization where they can share discoveries, research and ideas could benefit all of nVidia's products.

DCKing 1325 days ago [-]
All I can say to that is that I don't share your optimism. Explaining this as an acquihire sounds nice, but that seems a thin explanation of this purchase given the key role ARM has for many of Nvidia's competitors and the ludicrous amount of money Nvidia put on the table here. We'll see what the future holds - I certainly hope the open ecosystem can survive.
lagadu 1324 days ago [-]
I don't see this so much as Nvidia trying to strangle the competitors that license ARM as I see it as Nvidia gaining full control over the development of the architecture. This would enable them to own full solutions in the AI part of their business and tailor the architecture to their very specific needs.

Buying ARM not only allows them to control the direction of development, it also protects them against the possibility of someone else buying it with hostile intent.

Personally I see this as more of an attack on Intel.

DCKing 1324 days ago [-]
I agree actually that Nvidia's strategic motivation here is their long-term position w.r.t. Intel and AMD, maybe even Apple/Huawei/Samsung. And that's actually a good reason for this investment!

I likewise don't think destroying the ARM ecosystem is Nvidia's primary objective here - far from it. It might not even be a secondary objective. But I do think the ARM ecosystem will be slowly torn apart either as an innocent bystander or as a less prominent secondary objective (given real-world complexities, probably some combination of both). When Nvidia buys ARM, and they explicitly buy them to improve their competitive position, there's only one direction where I see the incentives going. Those will be against the open ARM ecosystem, which is a breeding ground for big and small competition against Nvidia.

Even if Nvidia's primary motives are relatively benign, I think they'll inevitably create a situation where the ARM ecosystem can't continue existing in its current form. That's where the real tragedy will be.

davrosthedalek 1325 days ago [-]
Or to make sure they can't be left in the dirt by somebody else. If Apple, for example, had bought ARM, I could imagine them being quite hostile to NVIDIA.
klelatti 1325 days ago [-]
How do we know that Samsung hasn't stifled some potential competitors by refusing to sell them screens or by selling them an inferior product?
baddox 1325 days ago [-]
Samsung makes (excellent) screens for iPhones, which are huge competitors to Samsung's own flagship phones, but Samsung still seems happy to take the profits from the screen sales. If there are smaller potential competitors that Samsung won't work with it's most likely because the scale is too small to be in their economic interest, not because they're rejecting profits in order to stifle potential competitors.
simias 1325 days ago [-]
iPhones sell massively well though, to the point where it would be a big loss if Apple went to another company for their screens. You just can't screw with a company the size of Apple without consequences. It's not like they'd go "oh no, no more iPhone now I guess :'(" if Samsung decided not to sell them screens anymore.

The problem is more with smaller companies that could be destroyed before they even get a chance to compete. Those can be bullied pretty easily by a company the size of Samsung.

baddox 1325 days ago [-]
I don’t really see how it works both ways. It’s hard to imagine a bigger scarier competitor to Samsung than the Apple iPhone, yet we agree Samsung is happy selling screens to Apple.
lagadu 1324 days ago [-]
I recall reading a breakdown a few years ago indicating that between the screen, fabs (before Apple went to TSMC) and memory, Samsung made more money from each iPhone sold than they did from each Galaxy S phone of the time. Seems like a winning deal to me.
bavell 1325 days ago [-]
Because there are a lot more Androids and other devices that need screens. If they didn't make screens for Apple, a competitor would.
babypuncher 1325 days ago [-]
Apple is a much bigger company than Samsung; they can't realistically turn down a contract that big when LG is lying in wait to take all that money.
sfifs 1325 days ago [-]
Not really. The revenues and profits of the Samsung group and Apple are roughly comparable, and in many ways Samsung's revenue streams are more diversified and sustainable... stock valuation notwithstanding.
headsupftw 1325 days ago [-]
"stock valuation notwithstanding" Perhaps you were thinking of Samsung Electronics, not Samsung Group.
tooltalk 1325 days ago [-]
There isn't really much outside Samsung Electronics in terms of revenue/profit.
royroyroys 1324 days ago [-]
Samsung SDS made $635 million in profit in 2019, which I think was higher than ARM's. As far as I'm aware, the 'Samsung Group' doesn't actually have all the companies under a single entity, so it's difficult to see what the overall revenue/profit of the entire 'group' is. It has quite a few other companies - heavy industries, insurance, asset management, etc. - and I think they all have thousands of employees and are very large companies in their industries, so I'd be surprised if the other companies weren't significant when combined. Did you have any data on this?
ngcc_hk 1325 days ago [-]
That is the key. There are, and need to be, alternatives. Real ones.

Not having one to rule them all is the key to every innovation.

tibbydudeza 1324 days ago [-]
Samsung is a massive company ... the division that handles screens is managed independently from the one that makes smartphones.

It is probably weird for Apple technical staff to communicate so closely with one division of Samsung while fighting for market share with another division of the same company.

runeks 1325 days ago [-]
> Samsung makes (excellent) screens for iPhones, which are huge competitors to Samsung's own flagship phones, but Samsung still seems happy to take the profits from the screen sales.

What markup does Apple pay for Samsung OLED displays compared to Samsung’s other OLED customers? I think this is highly relevant if you want to use it as an example. Because if the markup for Apple is 5x that of other buyers of Samsung OLED displays then you certainly can’t say Samsung is “happy” to sell them to Apple.

Same for nVidia-owned-ARM: if they’re happy to sell ARM licenses at 5x the previous price, then that will surely increase sales for nVidia’s own chips. I guess my overall point is: a sufficiently high asking price is equivalent to a refusal to sell.

paulmd 1325 days ago [-]
demanding information that you know nobody will be able to produce is an unethical debating tactic.

obviously nobody but Samsung and their customers will know that information, and anyone who could reveal it is under NDA.

Apparently the prices are good enough that Apple doesn't go elsewhere.

hajile 1325 days ago [-]
Resources are limited. If a Samsung phone and a Motorola phone need the same screen and there's not enough to go around, what happens?

A bidding war of course.

On the surface, it's capitalism at work. In reality, Samsung winds up in a no-lose situation. If Motorola wins, Samsung gets bigger margins due to the battle. If Samsung wins, they play "pass around the money" with their accountants, but their only actual costs are those of production.

I'd note that chaebol wouldn't exist in a free market. They rely on corruption of the Korean government.

marcosdumay 1325 days ago [-]
Screen manufacture is a highly competitive market. ARM licensing isn't.
captainbland 1324 days ago [-]
The main difference I'd say is that ARM holds more of a potentially captive industry that is totally intertwined with its IP. Most of the products that Samsung sell to other industries are more or less fungible with products that other third parties sell but the ARM design rights are going to be incredibly difficult to replace. Entire ecosystems of electronics and software are built on it.

There's a whole lot of inertia for Nvidia to take advantage of here while the rest of the industry figures out where it's going.

starfallg 1325 days ago [-]
The reason that ARM is only worth $30-40 billion while powering nearly all smartphones and much of the embedded space is that their business model is not that profitable. If Nvidia is looking for profits, ARM is a bad investment. If they are looking for synergies to boost Nvidia profits, it will be done at the expense of other licensees. That's exactly what people fear.
dv_dt 1325 days ago [-]
Didn't Samsung stop selling flash chips to outside parties when it suited them?
013a 1325 days ago [-]
I tend to agree, but there may be another angle to this which could prove beneficial to consumers. Right now, the only ARM chips which are actually competitive with desktop chips are from Apple, and are obviously very proprietary. If this acquisition enables Nvidia to begin producing ARM chips at the same level as Apple (somehow; who's to say how, that's on them), then that would help disrupt the AMD/Intel duopoly on Windows. It's been a decade; Qualcomm has had the time to try and compete here, and has failed miserably.

I doubt Nvidia would substantially disrupt or cancel licensing to the many third-rate chip designers you listed. But, if they can leverage this acquisition to build Windows/Linux CPUs that can actually compete with AMD and Intel, that would be a win for consumers. And Nvidia has shown interest in this in the past.

Yes, it's a massive disruption to the status quo. But it may be a good one for consumers.

klelatti 1325 days ago [-]
But Nvidia has an Arm architecture license already - the same as Apple - so can build Arm chips to whatever design it wants (and it does in Tegra).

This is nothing to do with extending Nvidia's ability to use Arm IP in its own products.

013a 1325 days ago [-]
Nvidia is a total powerhouse when it comes to chip design. Most people here are looking at this from the angle of "how does ARM benefit Nvidia", but I think it's more valuable to consider "how does Nvidia benefit ARM". In 2020, given what we know about Ampere, I really don't think there's another company out there with better expertise in microprocessor design (but, to be fair, let's say top 3 next to Apple and AMD). Now, they have more of the stack in-house, which may help produce better chips.

Yes, ARM mostly just does licensing, but it may turn out that this acquisition gives Nvidia positive influence over future ISA and fundamental design changes which emerge from their own experience building microprocessors.

Maybe that just benefits Nvidia, or maybe all of their licensees; I don't know. But I think the high price of this acquisition should signal that Nvidia wants ARM for more than just collecting royalties (or, jesus, the people here who think they're going to cancel the licenses or something - that's a wild prediction).

The other important point is Mali, which has a very obvious and natural synergy with Nvidia's wheelhouse. Another example of Nvidia making ARM better: Nvidia is the leader in graphics, there's no argument there, so their ability to positively influence Mali (whether by actually improving it, or replacing it with something GeForce-based) may be beneficial to the OEMs who use it.

DCKing 1325 days ago [-]
> Nvidia is a total powerhouse when it comes to chip design. Most people here are looking at this from the angle of "how does ARM benefit Nvidia", but I think it's more valuable to consider "how does Nvidia benefit ARM". In 2020, given what we know about Ampere, I really don't think there's another company out there with better expertise in microprocessor design (but, to be fair, let's say top 3 next to Apple and AMD). Now, they have more of the stack in-house, which may help produce better chips.

In my view you have this completely backwards. I think the opposite is true and that Nvidia is not a powerhouse CPU designer at all. They make extremely impressive GPUs, certainly, but that does not automatically translate to great capabilities in CPUs. For CPUs they have so far either used standard ARM designs or attempted their own Project Denver custom architecture, which is not bad but has not impressed CPU-wise either. In this area Nvidia needs ARM - primarily for themselves.

> The other important point is Mali, which has a very obvious and natural synergy with Nvidia's wheelhouse. Another example of Nvidia making ARM better: Nvidia is the leader in graphics, there's no argument there, so their ability to positively influence Mali (whether by actually improving it, or replacing it with something GeForce-based) may be beneficial to the OEMs who use it.

I know you're only entertaining the thought, but the image of Nvidia shipping HDL designs of GeForce IP to Samsung or Mediatek in the near future seems completely alien to me. Things would need to change drastically at Nvidia for them to ever do this.

Certainly Nvidia has the capabilities to sell way better graphics to the ARM ecosystem, and very likely only one line of GPUs can survive, but it just seems extremely unlike Nvidia to ever license GeForce IP to their competitors.

easde 1325 days ago [-]
Nvidia actually did try to license out its GPU IP a few years back: https://www.anandtech.com/show/7083/nvidia-to-license-kepler...

I don't believe they ever closed a deal, but clearly Nvidia had some interest in becoming an IP vendor. Perhaps the terms were too onerous or the price too high.

DCKing 1325 days ago [-]
Ah, that's very interesting. I wonder what kind of circumstance would get in the way of deals for a GPU line that (starting with Maxwell) is technically the best GPU architecture money can buy. Perhaps Nvidia were trying to win game console SoC deals?
ThrowawayR2 1325 days ago [-]
> "In 2020, given what we know about Ampere, I really don't think there's another company out there with better expertise in microprocessor design..."

Since Ampere and GPUs in general are structured nothing like a microprocessor, I doubt you'll find anyone who agrees with that.

smolder 1325 days ago [-]
AMD's answer to Ampere won't be shabby, based on info about the next-generation consoles. They're also widening the gap on CPUs with Zen, to the point where ARM won't have an easy time making inroads on server/workstation.

On the Intel side, the process obstacles have been tragic, but they have plenty of hot products and plenty of x86 market share to lose, or in other words, plenty of time to recover CPU performance dominance.

tibbydudeza 1324 days ago [-]
As Linus said, ARM will only be widely adopted if there is affordable hardware available that developers can actually buy.

Apple has pulled this off about 4 times because of their small market share, their willingness to deprecate old hardware and software, and the close control of the hardware they release.

In the PC world - x86 will remain with us for a LONG time to come.

klelatti 1324 days ago [-]
Which is why Arm in Macs is a crucial moment for the whole of the Arm ecosystem.
013a 1325 days ago [-]
I hope so. And when AMD's graphics cards and Intel's processors become good again, they're welcome to reclaim a top spot. But, until then, they are woefully behind.
DesiLurker 1325 days ago [-]
Perhaps they can be part of an 'early insider' program to get access to the next-gen architecture improvements, and use that to steer licensees towards integrating their own GPUs for a premium instead of the dinky little Malis.
ZuLuuuuuu 1323 days ago [-]
The failed "Windows on Snapdragon" experiment is not because of the fault of Qualcomm but Microsoft's. Qualcomm consistently released chips for PCs since Snapdragon 835 and the chips are pretty competitive in raw power compared to Intel. But even some of Microsoft's own software is not compiled natively for ARM leading to bad performance. No 64-Bit emulation means a lot of programs users download does not work at all. .Net Framework which is probably the most popular .Net framework for Windows programs is not compiled for ARM either so people who want to compile their apps need to first transition to .Net Core, and this wasn't even an option a year ago (because of lack of missing parts in .Net Core). So Microsoft is to blame for the handling of the Windows on Snapdragon experiment.
pier25 1325 days ago [-]
> There is no way Nvidia will spend 40 billion on this without deliberately or inadvertently destroying ARM's open CPU and GPU ecosystem

But why would a company spend that much money to buy a company and destroy it afterwards?

DCKing 1325 days ago [-]
Nvidia will certainly try to get as much money out of ARM's R&D capabilities, existing IP, and future roadmap as they can. They will get their money's worth - at worst they will fail trying. In that sense, they won't destroy "ARM the company" or "ARM the IP". But Nvidia will have no interest in maintaining ARM's business model whereby ARM fosters a community of Nvidia competitors - they have an interest in the opposite. Therefore they very likely will destroy "ARM the ecosystem".
bwanab 1325 days ago [-]
It's not going to destroy the business; it's going to destroy the current business model. The point is that the only way this deal can make sense for Nvidia is to use ARM's IP as a competitive advantage over its competitors. Until now, ARM's value proposition has been IP neutrality for the various user companies.
batmansmk 1325 days ago [-]
Oracle and Sun at $8B.
gorbachev 1324 days ago [-]
They figure they can make more than $40B by driving ARM into the ground?

Personally I don't think that's true in this particular case, but the strategy isn't exactly unheard of.

melbourne_mat 1325 days ago [-]
I think it's wonderful news that Arm is joining the ranks of giant American tech oligopolies. This is a win for freedom and increases prosperity for all.

/s

ngcc_hk 1325 days ago [-]
Both Apple and Windows have ARM implementations. In fact, with Apple it is ARM, not RISC-V, that we are looking at. Can Nvidia bring competition, or will it instead destroy ARM? I wonder what it wants to destroy - a CPU IP, or the competition it paid $40B to remove? What does it want from the deal? And what will it get from the deal? Are there any past deals we can gauge this by? Oracle buying Sun is the worst-case scenario.
edderly 1325 days ago [-]
I agree with the general sentiment here, but ARM is not exactly Snow White. It's an open secret that ARM was (and still is) selling the CPU design at a discount if you integrated their Mali (GPU). This isn't relevant to Nvidia today, but it was when they were in the mobile GPU space. Also this caused obvious problems for IMGtec and other smaller GPU players like Vivante.
Followerer 1325 days ago [-]
" It's an open secret that ARM was (and still is) selling the CPU design at a discount if you integrated their Mali (GPU)."

Why is that bad? Not only is it common business practice (the more you buy from us, the cheaper we sell), it also makes sense from the support perspective. Supporting the integration between their cores and a different GPU would be more work for them than integrating their cores with their own GPUs.

That's why companies expand to adjacent markets: efficiency.

A completely different thing would be to say: "if you want our latest AXX core, you have to buy our latest Mali GPU". That's bundling, and that's illegal.

johannes1234321 1324 days ago [-]
A question is how big those discounts are.

Microsoft gave away a free browser with their operating system - leaving little room for other browser vendors to serve that market.

Each ARM design deal including a GPU for cheap leaves little room for other GPU vendors.

hajile 1325 days ago [-]
Bundling isn't necessarily anti-competitive provided ARM isn't taking a loss selling their chip. I'll admit that things aren't actually free-market here because copyright and patent monopolies apply.

There are three possibilities here: ARM's design is approximately the same as the competitor's, ARM's design is inferior to the competitor's, or ARM's design is superior to the competitor's.

If faced with two equivalent products, staying with the same supplier for both is best (especially in this case where the IP isn't supply-limited). The discount means a reduction in costs to make the device. Instead of ARM making a larger profit, their customers keep more of their money. In turn, the super-competitive smartphone market means those savings will directly go to customers.

In cases where ARM's design is superior, why would they bundle? If they did, getting a superior product at an even lower price once again just means less money going to the big corporation and more money that stays in the consumer's pocket.

The final case is where ARM has an inferior design. I want to sell the most performance/features for the price so I can sell more phones. I have two choices: a slight discount on the CPU but bundled with an inferior GPU, or full price for the CPU and full price for a superior GPU. The first option lowers the phone's price. The second option offers better features and performance. For the high-end market, I'm definitely not going with the discount because peak performance reigns supreme. In the lesser markets, it's a calculation of price for total performance and the risk that consumers might prefer an extra few FPS for the cost of another few dollars.

Finally, there are a couple small players like Vivante or Imagination Technologies, but the remaining competitors in the space (Intel, AMD, Nvidia, Qualcomm, Samsung, etc) aren't going to be driven under by bundle deals, so bundling seems to be pretty much all upside for consumers who stand to save money as a result.

DCKing 1325 days ago [-]
I'm sure that ARM is not a saint here, in the sense that they would also have an incentive to milk their licensees as much as possible. Now they will keep having that incentive, but also the terrible incentive to actively outcompete their licensees, which is much worse.
ksec 1325 days ago [-]
>I agree with the general sentiment here, but ARM is not exactly Snow White. It's an open secret that ARM was (and still is) selling the CPU design at a discount if you integrated their Mali (GPU).

It is actually the other way around. ARM is more like giving the Mali GPU away for free (or at a very low cost) if you use their CPU.

>Also this caused obvious problems for IMGtec

Yes, that's part of the reason why PowerVR couldn't get more traction and why Apple were unhappy with their GPU pricing.

ngcc_hk 1325 days ago [-]
Tie-in sales are anti-competitive and not allowed? Maybe the per se rule is starting to go, but still...
klelatti 1325 days ago [-]
Bundling is not necessarily problematic: it happens at your local supermarket all the time!

What would be an issue is if Arm used their market power in CPUs to try to control the GPU market - e.g. you can't have the latest CPU unless you buy a Mali GPU with it.

spfzero 1323 days ago [-]
Exactly, there is no way ARM is worth $40B to Nvidia unless they are going to use it to arm-twist their competition.

Just look at ARM's annual net, multiply by 10, multiply that by 2 assuming starry-eyed optimism about being better at generating value from ARM IP, and you're still far from 40 billion.
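
A rough back-of-envelope along those lines (a sketch only; it assumes the roughly $300M 2016 net-income figure quoted elsewhere in this thread, since ARM's current numbers aren't public):

    # Back-of-envelope valuation in the spirit of the comment above.
    # Assumption: ARM's annual net income is roughly the $300M (2016) figure
    # cited elsewhere in this thread; current figures are not public.
    annual_net = 300e6

    # 10x earnings, then doubled for starry-eyed optimism about extracting more value.
    optimistic_value = annual_net * 10 * 2

    print(f"${optimistic_value / 1e9:.0f}B vs the $40B headline price")  # ~$6B, far short of $40B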

ksec 1325 days ago [-]
>Will Nvidia allow selling ARM licenses to competitors of Nvidia's business?

Like AMD? Sure. None of the ARM IP competes with Nvidia much. Not to mention that by not selling to AMD, they would create more problems for their $40B asset than anyone could imagine.

>Will Nvidia reserve ARM's best IP as a selling point for its own chips? Will Nvidia allow Mali to continue existing?

Sure. Mali doesn't compete with Nvidia at all - unless Nvidia puts up their CUDA cores for IP licensing with similar prices and terms to Mali. Could they kill Mali or raise its price? Sure. But there is always PowerVR. Not to mention AMD is also licensing out Radeon IP for mobile, mostly because AMD doesn't / can't compete in that segment.

>Unless Nvidia leaves ARM alone (and why would they spend $40B on that??)

It has more to do with SoftBank being an investor. They were already heavily invested in Nvidia. And they need money; they want out. And seriously, no one sane would buy ARM for $40B (it is actually $35B, with $5B as a performance bonus; the $40B number was likely used only for the headline). As a matter of fact, I would not be surprised if SoftBank promised to buy it back someday. This also paints a picture of how desperately Son / SoftBank need that cash. So something is very wrong. (Cough WeWork Cough)

But I do understand your point. Conflict of interest. Similar to how Apple wouldn't want to build their chips in a Samsung foundry.

While I would have liked ARM to remain independent, I am not as pessimistic as some commenters here. And normally I am the one people point at for pessimism.

On the optimistic side of things, there is quite a lot of cost that could be shared in the tools used for TSMC and Samsung Foundry implementations (Nvidia is now in bed with Samsung Foundry). For ARM that means higher margins; for its customers that means access to Samsung Foundry capacity where previously they were stuck with TSMC. Nvidia also gets to leverage ARM's expertise in licensing, so Nvidia GPUs could theoretically enter new markets. The real IP at Nvidia isn't so much the GPU design as its drivers and CUDA, so maybe Nvidia could work towards being an Apple-like "software" company that works with specific hardware. (Pure speculation only.)

There is lots of talk about Nvidia and ARM. While I don't think the marriage makes perfect sense, it is not all bad. There is a more interesting point no one is talking about: Marvell, AMD, Xilinx and maybe Broadcom. The industry is consolidating rapidly because designing a leading-edge chip, even with cheap IP licensing, is now becoming very expensive. And the four mentioned above have less leverage than their competitors.

Interesting times.

oldschoolrobot 1325 days ago [-]
exactly
Blammar 1326 days ago [-]
No one seems to have noticed the following two things:

"To pave the way for the deal, SoftBank reversed an earlier decision to strip out an internet-of-things business from Arm and transfer it to a new company under its control. That would have stripped Arm of what was meant to be the high-growth engine that would power it into a 5G-connected future. One person said that SoftBank made the decision because it would have put it in conflict with commitments made to the U.K. over Arm, which were agreed at the time of the 2016 deal to appease the government." (from https://arstechnica.com/gadgets/2020/09/nvidia-reportedly-to... )

and

"The transaction does not include Arm’s IoT Services Group." (nvidia news.)

which appear to contradict each other.

I'm not sure about the significance of this. I would have guessed Nvidia would have wanted the IoT group to remain.

Also, to first order, when a company issues stock to purchase another corporation, that cost is essentially "free" since the value of the corporation increases.

In other words, Nvidia is essentially paying $12 billion in cash for ARM up front, and that's all. (The extra $5B in cash or stock depends on financial performance of ARM, and thus is a second-order effect.)

manquer 1326 days ago [-]
It is not "free", it means current shareholders of Nvidia are paying for the remaining money. Their stock is diluted on fresh issue of shares.[1]

The $12B comes from Nvidia the company, the remaining money comes from Nvidia's shareholders directly.

[1] Only if ARM is actually "worth" the valuation will the fresh issue of shares cost the current shareholders nothing. This is rarely the case; if Nvidia overvalued (or, less likely, undervalued) the deal, then current shareholders are giving up more than they get for it.

bananaface 1326 days ago [-]
And note that the $12B is also owned by Nvidia's shareholders (they own the company which owns the cash), so they're paying for that too.

It's just different forms of shareholder assets being traded for other assets, by the shareholders (or rather, their majority vote).

manquer 1325 days ago [-]
Effectively all of it is funded by the shareholders who "own" the corporation.

In the current example, if the cash is coming from a debt instrument, is it not the bank funding it now?

It is typically about who is fronting the money now: it could be your bank making loans, your own cash reserves, a fresh stock issue, the sale of another asset, or even the target's bank, as in the case of an LBO.

The shareholders always end up paying for it eventually in some form or other. Differentiating it by the current source helps understand the deal structure and risks better.

pottertheotter 1325 days ago [-]
It's still the shareholders even if debt financing is used. The shareholders, through the company, have to pay off the debt. It increases the risk of bankruptcy.

When doing a deal, it comes down to price and source of funds. Changing either can drastically change how good or bad the deal is.

bananaface 1325 days ago [-]
Absolutely. I was just noting it for anyone reading.
finiteloops 1325 days ago [-]
To me, "paying for" and "diluted" connotates negative emotions.

Cash paid in this instance is treated no differently than cash in their normal operating expenses. If either generates profits in line with their current expected returns, the stock price stays the same, and everyone is indifferent to the transaction.

Same goes for stock issuance. If the expected use of proceeds from the issuance is in line with the company's current expected returns, everyone is indifferent.

Your statement is still true, and the stock market jumped today on the news, so I feel my connotation is misplaced.

pc86 1325 days ago [-]
> If either generates profits in line with their current expected returns, the stock price stays the same

This is a pretty naive depiction of Wall Street.

haten 1326 days ago [-]
Yes
walterbell 1326 days ago [-]
There were two separate IoT business units: Platform (https://pelion.com) and Data (https://www.treasuredata.com/). The Platform unit fits the Segars post-acquisition comment about end-to-end IoT software architecture, https://news.ycombinator.com/item?id=24465005

> One person close to the talks said that Nvidia would make commitments to the UK government over Arm’s future in Britain, where opposition politicians have recently insisted that any potential deal must safeguard British jobs.

So the deal has already been influenced by one regulator. That should encourage other regulators.

> SoftBank will remain committed to Arm’s long-term success through its ownership stake in NVIDIA, expected to be under 10 percent.

Why is this stake necessary?

oxfordmale 1326 days ago [-]
The UK government never learns. Kraft made similar promises about jobs when buying Cadbury, but quietly reneged on them over time. The Kraft CEO was asked to show up before a UK parliamentary committee, but of course declined, and that was the end of the story.
djmobley 1326 days ago [-]
Except the Cadbury-Kraft debacle led to major reforms in how the UK regulates foreign takeovers.

In the case of Arm, the guarantees provided back in 2016 were legally binding, which is why we’re here, four years and another acquisition later, with Nvidia now eager to demonstrate it is standing by those commitments.

Maybe in this particular instance they did learn something?

bencollier49 1326 days ago [-]
They should learn to prevent the sale, period. Promises to retain some jobs (forever? What happens if profits decline? What happens if the core tech is sold and ARM becomes a shell? Do the rules still apply?) address a tiny fraction of the problems presented by the sale of one of our core national tech companies.
tomalpha 1326 days ago [-]
I would have liked to see ARM remain owned in the UK. I think it's proven itself capable of innovation and organic growth on its own.

But how can we evaluate whether that will continue?

What if ARM is not sold, and then (for whatever reason) stagnates, doesn't innovate, gets overtaken in some way, and enters gradual decline?

Perhaps that's unlikely, but "prevent the sale, period" feels too absolute.

djmobley 1325 days ago [-]
Who is to say ARM was “owned in the UK” prior to the SoftBank acquisition?

Prior to that it was a publicly traded company, presumably with a diverse array of international shareholders.

bencollier49 1326 days ago [-]
That feels like giving up, to me. We should have the confidence that British industry can develop and flourish on its own merits without being sold off to foreign interests.
fluffything 1326 days ago [-]
Why have confidence when we can just look at ARM's financials?

There are more ARM chips sold each year than those of all its competitors together. Yet ARM's revenue is $300 million.

Why? Because ARM lives off the ISA royalties, and their revenue on the cores they license is actually small.

With RISC-V on the rise, and Western sanctions against China, RISC-V competition against ARM will only increase, and it is very hard to compete against something that's as good or better and has lower costs (RISC-V royalties are "free").

I really have no idea why NVIDIA would acquire ARM. If they want a world-class CPU team for the data-center, ARM isn't that (Graviton, Apple Silicon, Fujitsu, etc. are built and designed by better teams). ARM cores are used by Qualcomm and Samsung, but these aren't world-class and get beaten every gen by Apple Silicon. If they want ARM royalties, that's a high-risk business with very low reward (there is little money to be made there).

The only ok-ish cores ARM makes are embedded low-power cores (not mobile, but truly IoT < 1W embedded). It's hard to imagine that an architecture like Volta or Ampere, which performs well at 200-400W, would perform well in the <1W envelope. No mobile phone in the world uses Nvidia accelerators, and mobile phones are "supercomputers" compared with the kind of devices ARM is "ok-ish" at.

So none of this makes sense to me, except if NVIDIA wants to "license" GPUs with ARM cores to IoT and low-power devices like ARM does. But that sounds extremely far-fetched, because Nvidia is super far away from a product there, and also because the margins for those products are very, very thin, and Nvidia tends to like 40-60% margins. You just can't have those when IoT chips sell for $0.12. It's also hard to sell a GPU for these use cases because they often don't need it.

als0 1325 days ago [-]
> If they want a world-class CPU team for the data-center, ARM isn't that (Graviton

Graviton uses Neoverse CPU cores, which are designed by ARM. To say that ARM is not a world-class CPU team is unfair. Especially as Ampere Computing just announced an 80-core datacenter SoC using Neoverse cores.

Followerer 1325 days ago [-]
The main source of revenue for ARM is, by far, royalties. Licenses are paid once, royalties are paid by unit shipped. And they shipped billions last year.

Revenue is not $300 million; we don't know what ARM's revenue is because it hasn't been published since 2016. And back then it was around $1.5 billion. $300 million was net income. Again, in 2016.

I think you've already been adequately corrected on your misconceptions about ARM's CPU design teams.

lumberingjack 1325 days ago [-]
Apple mobile beating ARM mobile is pretty irrelevant considering they can barely run the same type of tasks; it's a miracle they even share the same benchmarks since their operating systems are totally different. Beyond that, it doesn't even really matter which one's faster because of all the overhead of the OSes, which are so different. I would also like to say that Apple users are going to buy the newer chip whether it's faster or not, because of planned obsolescence at the software level. I've never heard an Apple user talk about speed or the benchmarks of their phones, because that market segment is clueless about that. My guess is that Nvidia needs ARM for their self-driving stuff.
bogomipz 1325 days ago [-]
>" If they want a world-class CPU team for the data-center, ARM isn't that (Graviton, Apple Silicon, Fujitsu, etc. are built and designed by better teams)."

Fujitsu's latest HPC offering, the A64FX, is also ARM-based though. [1][2] And it sounds as though this is replacing their SPARC64 in this role.

[1] https://en.wikipedia.org/wiki/Fujitsu_A64FX

[2] https://en.wikipedia.org/wiki/Fugaku_(supercomputer)

floatboth 1324 days ago [-]
Fujitsu is using custom cores, but AWS Graviton (1 and 2) actually uses Arm Neoverse cores. The Ampere Altra will too.
formerly_proven 1325 days ago [-]
The core is not designed by ARM.
oxfordmale 1326 days ago [-]
Nvidia could still asset strip ARM, and then let ARM decline organically with redundancies justified by the decrease in revenue.
lumberingjack 1325 days ago [-]
Sounds like all those new regulations will be a limiting factor for new jobs. I'm not going to set up my company there if I've got to jump through hoops like that.
bencollier49 1326 days ago [-]
It absolutely outrages me. I've posted elsewhere on the thread about this, but I want to do something to prevent this sort of thing in future. Setting up a think tank seems like a viable idea. See my other response in the thread if you are interested and want to get in contact.
lumberingjack 1325 days ago [-]
Welcome to vulture capitalism; it's been gutting American industry for about 10-20 years. There used to be four or five paper mills around here, but now they're all in China and everyone who worked there is on drugs now.
fencepost 1325 days ago [-]
> Welcome to vulture capitalism; it's been gutting American industry for about 10-20 years

More like 30-40. https://en.wikipedia.org/wiki/Private_equity_in_the_1980s

Torkel 1326 days ago [-]
Get the deals in writing, with explicit liabilities if contract is broken. There, I fixed it.
bencollier49 1326 days ago [-]
What, and the profits, investments and growth made by the company will stay inside the country? There's a much broader problem here.
sgt101 1325 days ago [-]
What's this "quietly reneged over time"! Kraft simply welshed on the deal instantly! There were explosions and a change in the law.
LoSboccacc 1326 days ago [-]
It's all posturing unless the government makes the company agree to punitive severance packages, to be put in an escrow account and released to the company after 20 years, or to the people if they're fired before that.
boulos 1326 days ago [-]
> Why is this stake necessary?

Edit: it’s not necessary/a requirement.

They're noting that after the transaction, SoftBank will still fall below the 10% ownership threshold that triggers additional reporting to the SEC [1]:

> Section 16 of the Exchange Act applies to an SEC reporting company's directors and officers, as well as shareholders who own more than 10% of a class of the company's equity securities registered under the Exchange Act. The rules under Section 16 require these “insiders” to report most of their transactions involving the company's equity securities to the SEC within two business days on Forms 3, 4 or 5.

[1] https://www.sec.gov/smallbusiness/goingpublic/officersanddir...

xbmcuser 1326 days ago [-]
Are they getting 10% of nvidia or keeping 10% of Arm
btown 1326 days ago [-]
From the press release directly, it appears to be 10% of NVIDIA:

https://nvidianews.nvidia.com/news/nvidia-to-acquire-arm-for...

> Under the terms of the transaction, which has been approved by the boards of directors of NVIDIA, SBG and Arm, NVIDIA will pay to SoftBank a total of $21.5 billion in NVIDIA common stock and $12 billion in cash, which includes $2 billion payable at signing. The number of NVIDIA shares to be issued at closing is 44.3 million, determined using the average closing price of NVIDIA common stock for the last 30 trading days. Additionally, SoftBank may receive up to $5 billion in cash or common stock under an earn-out construct, subject to satisfaction of specific financial performance targets by Arm.

Since NVIDIA currently has 617 million shares outstanding, if the earn-out were to be fully in common stock, this would bring Softbank to 8.8% of NVIDIA from this transaction alone (plus anything they already have in NVIDIA as public-market investors).
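A quick back-of-the-envelope sketch of that arithmetic (the dollar figures and share counts come from the press release; treating the earn-out as all stock at the implied issue price, and the choice of denominator, are assumptions):

    # Rough check of the ownership math above; inputs are from the press release.
    stock_consideration = 21.5e9        # NVIDIA common stock paid to SoftBank, USD
    shares_issued_at_close = 44.3e6     # new NVIDIA shares issued at closing
    earn_out = 5.0e9                    # maximum earn-out, assumed paid in stock
    shares_outstanding = 617e6          # NVIDIA shares before the deal

    implied_price = stock_consideration / shares_issued_at_close    # ~$485 per share
    earn_out_shares = earn_out / implied_price                       # ~10.3M extra shares
    softbank_shares = shares_issued_at_close + earn_out_shares       # ~54.6M shares total

    # Just under 9% measured against the pre-deal share count (in line with the ~8.8% quoted above)...
    print(softbank_shares / shares_outstanding)
    # ...or roughly 8.1% of the enlarged share count once the new shares are included.
    print(softbank_shares / (shares_outstanding + softbank_shares))
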

(I believe that the Forbes analysis in the sibling comment is mistaken in describing this as a "10% stake in new entity" - no new entity is mentioned in the press release itself.)

walterbell 1326 days ago [-]
> no new entity is mentioned in the press release itself

The Forbes article is based on a joint interview today with the CEOs of Arm and Nvidia, who could have provided more detail than the press release, specifically:

> Arm operating structure: Arm will operate as an NVIDIA division

This level of detail can be confirmed during the analyst call on Monday. Operating as a separate division would help assuage concerns about Arm's independence. The press release says:

> Arm will remain headquartered in Cambridge ... Arm’s intellectual property will remain registered in the U.K.

Those statements are both consistent with Arm operating as a UK-domiciled business that is owned by Nvidia.

inopinatus 1325 days ago [-]
Given the quality of the reporting so far it’s possible that they are getting 10% of the UK.
walterbell 1326 days ago [-]
10% of Arm division of Nvidia, https://www.forbes.com/sites/patrickmoorhead/2020/09/13/its-....

> Softbank ownership: Will keep 10% stake in new entity

That would mean Nvidia acquired 90% of Arm for $40B, i.e. Arm was valued at $44B.

Is Softbank invested in Arm licensees, who may benefit from Softbank influence? Alternately, which Arm licensees would bid in a future auction of Softbank's 10% stake of Nvidia's Arm division?

nautilus12 1325 days ago [-]
Does this mean they are or aren't buying Treasure Data? What would happen if not?
walterbell 1325 days ago [-]
That would be up to Softbank.
nautilus12 1325 days ago [-]
So they aren't buying Treasure Data then?
walterbell 1325 days ago [-]
Looks like it will continue as an independent company, owned by Softbank, https://blog.treasuredata.com/blog/2020/09/14/nvidia-to-acqu...

> The transaction does not include Arm’s IoT Services Group, which is made up of Treasure Data and Arm’s IoT Platform business.

> Back in July, Arm announced a proposed transfer to separate Treasure Data from its core semiconductor IP licensing business, enabling Treasure Data to operate as an independent business. That separation is on track and will be completed before the close of the NVIDIA transaction. Most importantly, you should not see any disruption in our service or support as a result of this news.

pathseeker 1326 days ago [-]
>Also, to first order, when a company issues stock to purchase another corporation, that cost is essentially "free" since the value of the corporation increases.

This isn't correct. If investors think Nvidia overpaid, its share price will decline. There are many examples of acquiring companies losing significant value on announcements of plans to buy other companies, even in pure stock deals.

siberianbear 1326 days ago [-]
In addition, it's not even a valid argument if the cost was entirely in cash.

One would be making the argument, "the cost is essentially free because although we spent $40B, we acquired a company worth $40B". Obviously, that's not any more correct than the case of paying in stock.

HenryBemis 1326 days ago [-]
Correct. If it's shares, they just diluted the value of the stock. If I owned 1% of the company, then after they print and hand out $40bn worth of stock I keep the same number of shares, but now it's (e.g.) 0.5%, which means I just lost 50% of that investment. Which means I will receive 50% of the dividend (since out of next year's dividend 'pool' someone else will get a big chunk). Which means Nvidia just screwed the existing shareholders (for the time being; once the numbers/$ are merged I should be getting my dividend in two parts, one from ARM and one from NVIDIA).

If they gave away cash, that's a different story; it all depends. If they were sitting on $1tn in cash and they spent $40bn, that's no biggie. I mean, we went through COVID, what worse can come next?

kd5bjo 1326 days ago [-]
> If I owned 1% of the company, after printing and handing out $40bn worth of stock, then I keep the same amount of stock, but now it's (e.g.) 0.5%, which means I just lost 50% of that investment. Which means I will receive 50% of the dividend

Well, you’re now getting 50% of the dividend produced by the new combined entity. If the deal was correctly priced, your share of the Arm dividend should exactly replace the portion of your Nvidia dividend that you lost through dilution.
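A toy numeric sketch of that point (every figure here is made up for illustration; the only assumption doing work is that the dividend scales with the value of the combined company):

    # Hypothetical numbers only - not NVIDIA's or Arm's actual figures.
    acquirer_value = 300e9     # assumed pre-deal value of the acquirer
    target_value = 40e9        # value of the target, paid for entirely in new shares
    my_stake_before = 0.01     # you own 1% of the acquirer before the deal
    payout_rate = 0.002        # assumed dividend paid per dollar of company value

    # New shares worth target_value are issued, so your percentage is diluted...
    my_stake_after = my_stake_before * acquirer_value / (acquirer_value + target_value)

    # ...but the combined company is bigger by exactly the value of those shares.
    dividend_before = my_stake_before * acquirer_value * payout_rate
    dividend_after = my_stake_after * (acquirer_value + target_value) * payout_rate

    print(dividend_before, dividend_after)   # identical: dilution is offset by the target's contribution
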

throwaway5792 1325 days ago [-]
That's not how equity works. Paying cash or shares, the end result is exactly the same for shareholders barring perhaps some tax implications.
fortran77 1325 days ago [-]
I pay the car dealer $20,000, I get a car worth $20,000. Was the car free?
jaywalk 1325 days ago [-]
Technically, yes. Your net worth is unchanged. The problem is that the car is no longer worth $20,000 as soon as you drive it off the lot.
tonitosou 1325 days ago [-]
Actually, as soon as you get the car it's not a $20k car anymore.
fauigerzigerk 1326 days ago [-]
I agree that paying in new shares doesn't make the acquisition free. But the value of shares is an expectation of future cash flows. If that expectation is currently on the high side for Nvidia and on the low side for Arm then paying in shares makes the acquisition cheaper for Nvidia than paying cash.

Nvidia is essentially telling us that they think their shares are currently richly valued. I agree with that.
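A small sketch of what that implies (the implied issue price follows from the press-release numbers; the "intrinsic" per-share value is a made-up number purely for illustration):

    # Illustration of why paying in shares you consider richly valued is attractive.
    shares_issued = 44.3e6
    market_price = 21.5e9 / shares_issued     # ~$485/share, the price the deal assumes
    assumed_intrinsic_value = 400.0           # hypothetical "true" value per share

    nominal_cost = shares_issued * market_price           # $21.5B on paper
    real_cost = shares_issued * assumed_intrinsic_value   # ~$17.7B of value actually given up

    print(nominal_cost, real_cost)
    # If management believes the market price exceeds intrinsic value, settling
    # the bill in stock hands over less real value than paying the same amount in cash.
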

bananaface 1326 days ago [-]
Not necessarily. They could just prefer to be diversified. It can be rational to take a loss of value in the pursuit of diversification (depends on the portfolios of the stakeholders).

They could also think their shares are valued accurately but believe the benefits of synthesis would increase the value.

fauigerzigerk 1325 days ago [-]
How would any of that make payment in shares beneficial compared to cash payment?
bananaface 1325 days ago [-]
$12B cash appears to be all they have available. They can't pay more cash.

https://www.marketwatch.com/investing/stock/nvda/financials/...

Their only other option would be taking on debt.

fauigerzigerk 1325 days ago [-]
Right, so Nvidia with their A2 rating and their solid balance sheet in a rock bottom interest rate environment still found it favourable to purchase Arm with newly issued shares.

That's telling us something about what they think about their share valuation right now.

bananaface 1324 days ago [-]
That's not how interest works. "Low rate environments" are low rate because debt is less attractive than in a high rate environment. Debt isn't automatically preferable when interest rates are low - if it were, rates would rise. There's no free lunch.

All it tells you is that they think it's preferable to taking on debt, which in some sense is the position you always start from. Debt has a deadweight cost that you have to overcome.

But we can test your theory. You're saying they thought their shares were overvalued. The market's reaction was the opposite - announcing the deal bumped their share price 7.5%.

xxpor 1325 days ago [-]
Perhaps there are tax implications?
delfinom 1324 days ago [-]
>If investors think Nvidia overpaid, its share price will decline

You realize that logic no longer matters in this economy. There's an oversupply of printed money and stonks literally only going up.

smabie 1326 days ago [-]
It's rare for the acquiring company's stock to not decline, regardless of whether the market thinks they overpaid.
inlined 1326 days ago [-]
I worked at Parse when it was bought by Facebook. The day the news broke, Facebook’s market cap grew by multiples of the acquisition price. I remember being gobsmacked that Facebook effectively got paid to buy us.
rocqua 1326 days ago [-]
Unless Facebook itself actually issued stock at the new price, Facebook did not get paid. It was the Facebook shareholders who got paid.

Really, it shows that the market valued Parse much more than the cash it cost Facebook. If Parse had been bought with stock instead of cash, that would almost be cooler, since it would have allowed Parse to capture more of the surplus value they created (since the stock price popped).

smabie 1325 days ago [-]
More sophisticated analysis is required. You should look at the beta of Facebook at the time and determine whether the market reacted positively to the acquisition, or whether the move was in line with Facebook's exposure to the market.
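For what it's worth, the standard way to frame that is a simple market-model event study; a minimal sketch with entirely hypothetical numbers (a real analysis would estimate beta from historical returns and use the actual announcement-day moves):

    # Toy event-study sketch: was the stock's move abnormal given its market exposure?
    beta = 1.3               # assumed sensitivity of the stock to the overall market
    market_return = 0.010    # hypothetical market move on the announcement day
    stock_return = 0.045     # hypothetical move of the acquirer on the same day

    expected_return = beta * market_return          # what market exposure alone predicts
    abnormal_return = stock_return - expected_return

    # A large positive abnormal return suggests the market liked the deal itself,
    # rather than the stock simply riding a broad rally.
    print(abnormal_return)
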
jannes 1325 days ago [-]
Keep in mind that market cap is a fictional aggregate. It does not represent the real price that all shares could be sold for.
fishermanbill 1325 days ago [-]
When will Europe realise that there is no second place when it comes to a market - the larger player will always eventually end up owning everything.

I can not put into words how furious I am at the UK's Conservative party for not protecting our last great tech company.

Europe has been fooled into the USA's ultra-free-market system (which works brilliantly for the US but is terrible for everybody else). As such, American tech companies have brought EVERYTHING and eventually mothballed them.

Take Renderware: it was the leading game engine of the PS2-era consoles, brought by EA and mothballed. Nokia is another great example, brought by Microsoft and mothballed. Imagination Technologies was slightly different in that it wasn't bought, but Apple essentially mothballed them. Now ARM will undoubtedly be next, via an intermediate buyout.

You look across Europe and there is nothing. Deepmind could have been a great European tech company - it just needed the right investment.

Barrin92 1325 days ago [-]
Tech is only 10% of the US economy, and European nations are much more reliant on free trade - Germany in particular, whose exports constitute almost 47% of its GDP, a level only comparable to South Korea among developed nations.

I get that Hacker News is dominated by people working in software and software news, but as a part of the real economy (and not the stock market) it's actually not that large, and Europe doesn't frame trade policy around it, for good reasons.

The US also doesn't support free trade for economic reasons, but for political and historical reasons: to maintain a rules-based alliance across the globe, traditionally to fend off the Soviets. Because they aren't around any more, the US is starting to ditch it. The US has never economically benefited from free trade; it's one of the most insular nations on the planet. EU-Asia trade, with a volume of 1.5 trillion, is almost double EU-American trade and still growing, and that's why Europe is free-trade dependent.

fishermanbill 1324 days ago [-]
Tech sadly is everything, hence why it dominates the stock market. Information is the new oil and all that. It's much more than purely economic: whoever controls the tech controls those who use the tech. Hence why China has created its own tech companies - they aren't stupid.

I think you're also getting mixed up between 'free trade' and 'free markets'. Free trade is about trade deals: NAFTA, WTO, EU, CPTPP, Mercosur or whatever trade grouping you want - generally to do with the removal of tariffs and the standardisation of goods between countries.

Free markets, on the other hand, are to do with the liberalisation of markets, i.e. removing government intervention (as much as possible), i.e. regulations and restrictions on the buying and selling of stuff - in this case companies (which can be covered in a trade deal, admittedly).

What I'm advocating is that the British government (and most European governments) restrict the selling of their tech companies based purely on the importance of the tech company.

Why?

Because, as I say, it's to do with control. We're not able to make democratic, sovereign decisions when the fabric of how most things are done is controlled completely by someone else.

KptMarchewa 1325 days ago [-]
Only because of EU. Intra-EU trade is more comparable to trade between US states then true international trade.
mantap 1324 days ago [-]
I'd say the opposite, intra-EU trade is more like international trade, at least for B2C. Each country has its own national market situation, companies cannot easily expand to the entire EU because in every EU country they will find different local competitors who know the local market much better than they do. Every product has to be localised for the local language and culture. All marketing has to be localised.

Despite efforts to the contrary the EU functions as a glorified free trade zone, half a century of integration cannot beat 1000 years of fragmentation.

fishermanbill 1324 days ago [-]
Just to be clear, my statement has nothing to do with the 'EU', which is largely a trade body. I specifically used the terms 'Europe' and 'free markets', not 'free trade'. To avoid confusion: this has nothing to do with Brexit.
reader_mode 1324 days ago [-]
Not true at all - EU does make things simpler but it's still very different legal systems, currencies and even languages.
emptyfile 1324 days ago [-]
Why? What's the difference between USA-Mexico trade in car parts and intra-EU trade in car parts?
alfalfasprout 1325 days ago [-]
And you really think more protectionism will help?

Maybe part of the problem is that due to so many regulations, there's not a healthy startup ecosystem and the compensation isn't remotely high enough to draw the best talent.

mytherin 1325 days ago [-]
Regulation has little to do with it. Most of the tech industry is inherently winner-takes-all or winner-takes-most because of how easy it is to scale up tech solutions. US companies get a huge head-start because of their large home market compared to the fragmented EU market, and can easily carry that advantage into also dominating the EU market.

There is a reason Russia and China have strong tech companies and Europe doesn’t. That reason isn’t lack of money, lack of talent or regulations. The only way for Europe to get big tech companies is by removing or crippling big US companies so EU companies can actually compete. The US companies would be quickly replaced by EU alternatives and those would offer high compensation all the same.

Whether or not that is worth it from the perspective of the EU is not so black and white - tech is obviously not everything - but the current situation where all EU data gets handed to the US government on a silver platter is also far from optimal from the perspective of the EU.

fishermanbill 1324 days ago [-]
Absolutely. The only bit I don't agree with is that tech isn't everything - to all intents and purposes it is (at this point in time).
smabie 1325 days ago [-]
The only way for Europe to get good tech companies is to create a business friendly environment. They seem unwilling to do that.
ngcc_hk 1325 days ago [-]
When have China and Russia ever had a lead in IT without being protectionist and copying? I can only think of TikTok.

The strange thing is, if you don't count Brexit, Arm is one of many examples that the UK can do it. And when we mention Nokia, Siemens and Japan's Fuji (I'm sitting in a hospital right now, thinking of those MRI machines...), non-Chinese and non-Russian companies do dominate the tech world even when they are not from the USA. Given their communist, totalitarian ideology, I find TikTok really is the exception.

Hence I think the EU has its own problems, but not because it is any less capable than Russia or China.

guorzhen 1324 days ago [-]
Tmall of Alibaba processed 544,000 transactions per second during the peak of its Singles' Day in 2019. I believe this has set a new world record for an e-commerce platform.

In case you didn't know about Singles' Day: https://graphics.reuters.com/SINGLES-DAY-ALIBABA/0100B30E24T...

DiogenesKynikos 1324 days ago [-]
Huawei. They lead in 5G equipment because they did the necessary R&D investment.

Their phones are pretty good, too (or were pretty good before they got cut off from their suppliers). Their edge was that they built really great cameras into their phones.

fishermanbill 1324 days ago [-]
Yes, the US is all about protectionism; the same goes for China, and the same goes for the EU (also note my use of Europe - EU != Europe). Hence why all three major trade blocs don't have complete free trade deals set up between them (not that my comment has anything to do with free trade deals - at least not primarily).

Ultimately free trade is where the world would like to get to, purely on an economic basis, but you have to do that in tandem with the rest of the world. If you go first, everybody else has an economic advantage over you, as the UK will possibly find out after Brexit actually happens. Also, politics gets in the way of the world achieving full free trade. Some government will always want votes by protecting an industry - like the UK's fishing industry, for instance.

dijit 1324 days ago [-]
There are a lot of tech companies that start and are successful in Europe. They just get bought when they get big enough and killed later.

Skype is another example to add to the list.

Skype is effectively dead technology and isn’t even promoted any more.

AsyncAwait 1325 days ago [-]
> Maybe part of the problem is that due to so many regulations, there's not a healthy startup ecosystem

A Reaganomics talking point since the 80s, yet the U.S. constantly relaxes regulations - recently it rolled back even more environmental ones, and parts of the country now look like Mars.

But of course, cut regulations, cut corporate taxes, cut benefits, cut, cut, cut. There's apparently never a failure mode for this kind of capitalism. Even 2008 was blamed on regulation, rather than the lack thereof.

I'm quite frankly done with this line of argument.

quadrifoliate 1325 days ago [-]
I know what you are referring to in the US, but that doesn't explain the intra-Europe differences in regulation when it comes to doing business. Like Denmark is number 4, Germany is number 22, and Greece is number 79 on this index? [1] I don't think Denmark (or Norway, at number 9) is exactly an unregulated capitalist paradise where the environment is being destroyed in the name of business.

I say this because I think Europe with a set of consistent regulations and ways of establishing business would serve as a good counterweight to the freewheeling, "anything goes" nature of US capitalism. But I think the fragmentation is its Achilles Heel.

------------------------

[1] https://www.doingbusiness.org/en/rankings?region=oecd-high-i...

geon 1325 days ago [-]
I don’t really see your point. Even the examples you list are nonsensical.

* We are no longer in the ps2 era. EA now uses Frostbite, which was developed by the Swedish studio Dice. It is alive and well, powering some 40-50 games. https://en.m.wikipedia.org/wiki/Frostbite_(game_engine)

* Nokia was dead well before MS bought them.

ksec 1325 days ago [-]
I sort of agree, but those aren't exactly great examples.

I don't see how Renderware would have competed with Unreal. Even their owner EA chose Unreal. They were great in the PS2 era, but for the next-gen console (PS3) they were not.

Nokia was dead even before Stephen Elop became the CEO. So the Microsoft acquisition has nothing to do with it.

IMG - yes. But I would argue they were dead either way. They couldn't get more GPU licensing due to ARM's Mali being cheap and good enough. They couldn't expand into other IP licensing areas. Their MIPS acquisition was 5 years too late. Their wireless part couldn't compete with CEVA. And they somehow didn't sell themselves to Apple as an exit. (But then Apple lied about not using IMG's IP. While Steve Jobs would often put a spin on things, I find Tim Cook's Apple quite often just flat out lies.)

fishermanbill 1324 days ago [-]
I'm a game rendering engineer who uses Unreal (and used Renderware), so I know a little about this subject.

If Renderware hadn't been brought by EA (and hence controlled by your competitor), the rest of the industry would probably have kept using it, as it was the best option, and development would have continued. It would have been built on to deliver next-gen experiences.

It mirrors pretty much perfectly what is wrong with the ARM-Nvidia deal.

Nokia, yes, wasn't doing well in the smartphone sector, but it was doing excellently in the feature phone sector. Hence why HMD Global is now doing very well selling those handsets.

nitrobeast 1325 days ago [-]
And if Europe develops a company that threatens leading US tech / surveillance companies like Facebook or Google, or becomes the leader of the next tech wave, be prepared for US government action to take it down. See: the Japanese semiconductor industry in the 80s, Alstom, Bombardier, TikTok.
bitL 1325 days ago [-]
One could argue that's what happened to Nokia - the only non-US company controlling the largest growing market at the time. For anyone saying it was dead already: you don't understand that despite minimal presence in the US market it controlled over 60% of the world market, and that could have lasted much longer thanks to customer loyalty and the upcoming MeeGo (which even US reviewers liked), if it hadn't been Eloped.
read_if_gay_ 1324 days ago [-]
Everyone I know here in Europe thought Nokia was dead when MS bought it.
bitL 1321 days ago [-]
Yes, when they bought it it was already dead, but I was talking about the time they were #1 and just got their new ex-MS CEO with a $20M bonus in case he sold the company. Then he proceeded to dismantle all that worked.
ifmpx 1325 days ago [-]
It doesn't even have to be the US government itself doing this; US companies are becoming large and strong enough to do as they please without regard to what lesser organizations, and even countries, have to say.

Perhaps governments around the world should do what the US did (and still does) to foreign companies, before it's too late.

ineedasername 1325 days ago [-]
They have yet to do this with rare earth metals, where relying on China for the vast majority of the supply represents a huge strategic threat to both the US and much of the rest of the world.
nitrobeast 1325 days ago [-]
It has been said that other countries, including the US, in fact have more rare earth deposits. If they chose to relax environmental laws and disregard patents held by Chinese corporations, more rare earths would be produced.
ineedasername 1324 days ago [-]
Yes, other countries have them. They're not willing to produce them at the higher cost required to obey environmental laws. I'm not sure how patent laws are a huge issue here: rare earths may have increasing market demand, but they've been extracted and processed for countless years. Only innovations China has made in the last ~20 years would be covered by patents. I'm sure there have been some innovations, but I'm also sure our other mining industries in the US would be happy to leverage their patent portfolio in a patent war if China wasn't willing to work on reasonable licensing terms.

A global strategic bottleneck in China for these things doesn't require patent or environmental law violations, it just requires us to pay more for them. For a critically strategic resource like this, the US should ensure a consistent supply chain independent of geopolitical concerns with China. And if China were to cut off supply then concern for their patents goes out the window too.

Speednet 1325 days ago [-]
Wait - it was OK when ARM was owned by a Japanese company, but suddenly bad when a US company buys them? Being anti-US is not a legitimate point. If anything, nVidia will create new market opportunities and expand existing ones for ARM. They have already said they will be expanding their UK presence. Maybe don't react so emotionally next time.
fishermanbill 1323 days ago [-]
No, it wasn't OK when Softbank bought them. I vehemently disagreed with the sale then, as this was the obvious next step after Softbank acquired them.

Although I don't agree with the sale to Softbank, at least they didn't have a dog in the fight. What is bad about Nvidia is that they do have a dog in the (chip) fight. A major reason you went to ARM was for an unbiased design team - you knew you were getting their best if you paid for it. Now I'm afraid you don't.

Also, my comment is not anti-US; it just so happens that the US has all the big tech companies, and foolishly the European countries, especially the UK, believe they can compete on a level playing field with the US even though the US's GDP is about 9 times bigger than the UK's - god knows how much bigger its equity markets are.

At the EU level the US isn't so much bigger, but then this kind of stuff isn't decided at the EU level - maybe it should be, not that that will help the UK see the error of its ways after Christmas.

As for emotional reactions: with the greatest of respect, did you read your message before you posted it?

swiley 1325 days ago [-]
ARM was second place in laptops for quite a while (arguably still.)
mas3god 1324 days ago [-]
If Europe doesn't like American business, they can make their own companies or use open source... RISC-V is a great alternative to ARM anyway.
rland 1325 days ago [-]
Bad for the US as well.

A monoculture is bad. Any monoculture.

chakintosh 1324 days ago [-]
DICE, the Swedish game developer, was bought by EA and is now belly up.
auggierose 1325 days ago [-]
brought = bought ?
fishermanbill 1324 days ago [-]
lol yes, and in the next line you can see I used 'bought' - just a simple typo.
monoclechris 1324 days ago [-]
I can not put into words how ill-informed you are.
hobby-coder-guy 1325 days ago [-]
Brought?
zdw 1326 days ago [-]
I see this going a few ways for different players:

The perpetual architecture license folks that make their own cores like Apple, Samsung, Qualcomm, and Fujitsu (I think they needed this for the A64FX, right?) will be fine, and may just fork off on the ARMv8.3 spec, adding a few instructions here or there. Apple especially will be fine as they can get code into LLVM for whatever "Apple Silicon" evolves into over time.

The smaller vendors that license core designs (like the A5x and A7x series, etc.) like Allwinner, Rockchip, and Broadcom are probably in a worse state - nVidia could cut them off from any new designs. I'd be scrambling for an alternative if I were any of these companies.

Long term, it really depends on how nVidia acts - they could release low end cores with no license fees to try to fend off RISC-V, but that hasn't been overly successful when tried earlier with the SPARC and Power architectures. Best case scenario, they keep all the perpetual architecture people happy and architecturally coherent, and release some interesting datacenter chips, leaving the low end (and low margin) to 3rd parties.

Hopefully they'll also try to mend fences with the open source community, or at least avoid repeating past offenses.

AnthonyMouse 1326 days ago [-]
> The perpetual architecture license folks that make their own cores like Apple, Samsung, Qualcomm, and Fujitsu (I think they needed this for the A64FX, right?) will be fine

There is one thing they would need to worry about though, which is that if the rest of the market moves to RISC-V or x64 or whatever else, it's not implausible that someone might at some point make a processor which is superior to the ones those companies make in-house. If it's the same architecture, you just buy them or license the design and put them in your devices. If it's not, you're stuck between suffering an architecture transition that your competitors have already put behind them or sticking with your uncompetitive in-house designs using the old architecture that nobody else wants anymore.

Their best move might be to forget about the architecture license and make the switch to something else with the rest of the market.

zdw 1326 days ago [-]
> Their best move might be to forget about the architecture license and make the switch to something else with the rest of the market.

This assumes that there isn't some other factor in transitioning architecture - this argument could boil down in the mid 2000's to "Why not go x86/amd64", but you couldn't buy a license to that easily (would need to be 3-way with Intel/AMD to further complicate things)

Apple has done quite well with their ARM license, outperforming the rest of the mobile form factor CPU market by a considerable margin. I don't doubt that they could transition - they've done it successfully 3 times already, even before the current ARM transition.

Apple under Cook has said they want to "to own and control the primary technologies behind the products we make". I doubt they'd turn away from that now to become dependent on an outside technology, especially given how deep their pockets are.

thomasjudge 1325 days ago [-]
It's kind of puzzling that Apple didn't buy them. They don't seem to be particularly aggressive/creative in the M&A department
ed25519FUUU 1325 days ago [-]
They have a perpetual license. What value is it for them to buy ARM?
ineedasername 1325 days ago [-]
Many of Apple's direct competitors are ARM customers, and US politics is already turning a large anti-trust eye towards FAANG. That might have been a factor.
andoriyu 1325 days ago [-]
Apple buying ARM would never get approved. They have way too many direct competitors that heavily rely on SoCs with ARM cores.
fluffything 1326 days ago [-]
Apple could transition in 10 years to RISC-V, just like how they transitioned 10 years ago to x86, 10 years before to PPC, 10 years before to .........
spullara 1326 days ago [-]
They did dump ZFS when they decided they didn't like the licensing terms.
AnthonyMouse 1325 days ago [-]
> Apple has done quite well with their ARM license, outperforming the rest of the mobile form factor CPU market by a considerable margin.

But that has really nothing to do with the architecture. They just spent more money on CPU R&D than their ARM competitors. They could have done the same thing with RISC-V, and if that's where the rest of the industry is going, they could be better off going there too. Especially for Mac, since they're about to do a transition anyway. They could benefit from putting it off for another year while they change the target architecture to something the rest of the market might not be expected to avoid in the future.

There is also no guarantee that their success is permanent. They might have done a better job than Qualcomm this year, but what happens tomorrow, if Google throws their hat into the ring, or AMD makes a strong play for the mobile market, or Intel gets their heads out of their butts, or China decides they want the crown and gives a design team an unlimited budget? There is value in the ability to switch easily in the event that someone else is the king of the mountain for a while.

> I don't doubt that they could transition - they've done it successfully 3 times already, even before the current ARM transition.

Just because they can do it doesn't mean it's free.

Really if Intel was shrewd, they'd recognize that they've lost Apple's business already and just sell them an x86 license, under some terms that Intel would care about and Apple wouldn't, like they can only put the chips in their own devices. Then Apple could save themselves the transition entirely and do another refresh with processors from Intel while they put their CPU team to task redesigning their own chips to be x86_64. It would give both Apple and Intel a chance to throw a punch at Nvidia (which neither of them like) while helping both of them. Apple by avoiding the Mac transition cost and Intel by maintaining the market share of their processor architecture and earning them whatever money they get from Apple for the license.

And it gives Intel a chance to win Apple's business back. All they have to do is design a better CPU than Apple does in-house, and Apple could start shipping them again without having to switch architectures. Which is also of value to Apple because it allows them to do just that if Intel does produce a better CPU than they do at any point in the future.

ed25519FUUU 1325 days ago [-]
The problem I see with your argument is that it ignores their superior mobile ARM SoC. If they are going to unify on an architecture, it would certainly be easier to migrate Mac to ARM than iOS to x86.
AnthonyMouse 1324 days ago [-]
Your assumption is that the architecture matters and they couldn't build an x86 SoC which is just as good, but they could.

And you don't want to move to ARM if everybody else is moving away from it.

kllrnohj 1326 days ago [-]
Samsung, Qualcomm, and MediaTek all currently just use off the shelf A5x & A7x cores in their SoCs. Unless that part of the company is losing money I don't expect nVidia to cut that off. Especially since that's likely a key part of why nVidia acquired ARM in the first place - I can't imagine they care about the Mali team(s) or IP.
klelatti 1326 days ago [-]
What happens if/when Nvidia launches SoCs that compete with Qualcomm and MediaTek? Will it continue to offer the latest cores to competitors when it can make a lot more money on its own SoCs? This is the reason for the widespread concern about Nvidia owning Arm.
kllrnohj 1325 days ago [-]
I don't know if Nvidia is eager to re-enter the SoC market. It wouldn't be a clear path to more money, since they would need to then handle modem, wifi, ISP, display, etc... instead of just CPU & GPU. And they'd need to work with Android & its HALs. And all the dozens/hundreds of device OEMs.

They could, but that's more than just an easy money grab. Something Nvidia would already be familiar with from Tegra.

What seems more likely/risky would be nvidia starts charging a premium for a Mali replacement, or begins sandbagging Mali on the lower end of things. But Qualcomm already has Adreno to defend against that.

entropicdrifter 1325 days ago [-]
Looks like Nvidia never left the SoC market to begin with.

The latest Tegra SoC launched March of 2020.

kllrnohj 1325 days ago [-]
They released a new SBC aimed at autonomous vehicles. They haven't had a mobile SoC since 2015's Tegra X1, which only made it into the Pixel C tablet, the Nintendo Switch, and Nvidia's Shield TV (including the 2019 refresh).
riotnrrd 1325 days ago [-]
You're forgetting the TX2 as well as the Jetson Nano
kllrnohj 1325 days ago [-]
Which ended up in what mobile products?
ksec 1325 days ago [-]
MediaTek is in the lower-end market, something Nvidia's culture doesn't like competing in. Qualcomm holds the key to modems, which means Nvidia competing with Qualcomm won't work that well. Not to mention they already tried that with Icera, and generally speaking mobile phone SoCs are a low-margin business (comparatively speaking).
klelatti 1325 days ago [-]
Completely take your point on Qualcomm.

On mobile SoC margins, I'd guess that margins are low because there is a lot of competition - start cutting off IP to the competition and margins will rise.

I suspect their focus will be on server / automotive to start off with, but the very fact that they can do any of this is troubling to me.

wyldfire 1326 days ago [-]
Qualcomm used to design their own cores up until the last generation or two, but you're right they use the reference designs now.

EDIT: correction, make that the last generation or four (oops, time flies)

kllrnohj 1326 days ago [-]
Qualcomm hasn't used an in-house core design since 2015 with the original Kryo. Everything Kryo 2xx and newer are based on Cortex.
awill 1326 days ago [-]
That was a really sad time, honestly. QCOMM went from leading the pack to basically using reference designs (which they still arrogantly brand as Kryo despite them essentially being tweaks of reference designs).

It all happened because Apple came out with the first 64-bit design and QCOMM wasn't ready. Rather than deliver 32-bit for one more year, they used an off-the-shelf ARM 64-bit design (A57) in their SoC called the Snapdragon 810, and boy was it terrible.

himinlomax 1326 days ago [-]
From what I gathered, they made at least _some_ risky architecture choices in their custom architecture that turned out not to be sustainable over the next generations. Also note that their Cortex core is indeed customized to a significant extent.
Followerer 1325 days ago [-]
"and may just fork off on the ARMv8.3 spec, adding a few instructions here or there"

No, they may not. People keep suggesting these kinds of things, but part of the license agreement is that you can't modify the ISA. Only ARM can do that.

kortex 1325 days ago [-]
Well, regardless of whether this amendment is kosher or not, AMX definitely exists. Perhaps the $2T tech behemoth was able to work out a sweetheart deal with the $40B semiconductor company.

> There’s been a lot of confusion as to what this means, as until now it hadn’t been widely known that Arm architecture licensees were allowed to extend their ISA with custom instructions. We weren’t able to get any confirmation from either Apple or Arm on the matter, but one thing that is clear is that Apple isn’t publicly exposing these new instructions to developers, and they’re not included in Apple’s public compilers. We do know, however, that Apple internally does have compilers available for it, and libraries such as the Acclerate.framework seem to be able to take advantage of AMX. [0]

my123's instruction names lead to a very shallow rabbit hole on Google, which turns up a similar list [1]

Agreed upon: ['amxclr', 'amxextrx', 'amxextry', 'amxfma16', 'amxfma32', 'amxfma64', 'amxfms16', 'amxfms32', 'amxfms64', 'amxgenlut', 'amxldx', 'amxldy', 'amxldz', 'amxldzi', 'amxmac16', 'amxmatfp', 'amxmatint', 'amxset', 'amxstx', 'amxsty', 'amxstz', 'amxstzi', 'amxvecfp', 'amxvecint']

my123 also has ['amxextrh', 'amxextrv'].

[0] https://www.anandtech.com/show/14892/the-apple-iphone-11-pro....

[1] https://www.realworldtech.com/forum/?threadid=187087&curpost...

my123 1325 days ago [-]
That's untrue.

(Famously so: Intel used to ship ARM chips with WMMX, and Apple for example ships CPUs today with the AMX AI acceleration extension.)

rrss 1325 days ago [-]
WMMX was exposed via the ARM coprocessor mechanism (so it was permitted by the architecture). The coprocessor stuff was removed in ARMv8.
my123 1325 days ago [-]
Now custom instructions are directly on the regular instruction space...

(+ there's the can of worms of target-specific MSRs being writable from user-space, Apple does this as part of APRR to flip the JIT region from RW- to R-X and vice-versa without going through a trip to the kernel. That also has the advantage that the state is modifiable per-thread)

Followerer 1325 days ago [-]
In ARMv8 you have a much cleaner mechanism through system registers(MSR/MRS).
saagarjha 1325 days ago [-]
Apple has been using system registers for years already. AMX is interesting because it's actual instruction encodings that are unused by the spec.
Followerer 1325 days ago [-]
That's like saying that my Intel CPU comes with an NVIDIA Turing AI acceleration extension. The instructions an Apple ARM-based CPU can run are all ARM ISA. That's in the license arrangement: if you fail to pass ARM's compliance tests (which include not adding your own instructions, or modifying the ones included) you can't use ARM's license.

Please, stop spreading nonsense. All of this is public knowledge.

my123 1325 days ago [-]
No. I reverse-engineered it and AMX on the Apple A13 is an instruction set extension running on the main CPU core.

The Neural Engine is a completely separate hardware block, and you have good reasons to have such an extension available on the CPU directly, to reduce latency for short-running tasks.

rrss 1325 days ago [-]
Are the AMX instructions available in EL0?

Is it possible AMX is implemented with the implementation-defined system registers and aliases of SYS/SYSL in the encoding space reserved for implementation-defined system instructions? Do you have the encodings for the AMX instructions?

my123 1325 days ago [-]
AMX instructions are available in EL0 yes, and are used by CoreML and Accelerate.framework.

A sample instruction: 20 12 20 00... which doesn't by any stretch parse as a valid arm64 instruction in the Arm specification.

Edit: Some other AMX combinations off-hand:

00 10 20 00

21 12 20 00

20 12 20 00

40 10 20 00
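One way to sanity-check that claim is to feed those byte sequences to an off-the-shelf AArch64 disassembler; a minimal sketch using Capstone (my choice of tool, not something the posters above used - the byte strings are the ones quoted in this thread, interpreted as little-endian words):

    # pip install capstone
    from capstone import Cs, CS_ARCH_ARM64, CS_MODE_LITTLE_ENDIAN

    md = Cs(CS_ARCH_ARM64, CS_MODE_LITTLE_ENDIAN)

    # Byte sequences quoted above, as they would appear in memory.
    candidates = ["20122000", "00102000", "21122000", "40102000"]

    for hexbytes in candidates:
        decoded = list(md.disasm(bytes.fromhex(hexbytes), 0x0))
        # An empty result means Capstone's ARMv8 decoder rejects the encoding,
        # consistent with these being vendor-private (AMX) instructions.
        print(hexbytes, [f"{i.mnemonic} {i.op_str}" for i in decoded] or "no valid AArch64 decoding")
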

rrss 1325 days ago [-]
very interesting, thanks!
Followerer 1325 days ago [-]
The AMX is an accelerator block... If you concluded otherwise, your reverse-engineering skills are not great...

Let me repeat this: part of the ARM architectural license says that you can't modify the ISA. You have to implement a whole subset (the manual says what's mandatory and what's optional), and only that. This is, as I've been saying, public knowledge. This is how it works. And there are very good reasons for this, like avoiding fragmentation and losing control of their own ISA.

And once again, stop spreading misinformation.

my123 1325 days ago [-]
Hello,

Specifically about the Apple case,

Given your tone, I'm certainly not obligated to answer, but I'll write one quickly...

Apple A13 adds AMX, a set of (mostly) AI acceleration instructions that are also useful for matrix math in general. The AMX configuration happens at the level of the AMX_CONFIG_EL1/EL12/EL2/EL21 registers, with AMX_STATE_T_EL1 and AMX_CONTEXT_EL1 being also present.

The list of instructions is at https://pastebin.ubuntu.com/p/xZmmVF7tS8/ (didn't bother to document it publicly at least at this point).

Hopefully that clears things up a bit,

And please don't ever do this again, thank you. (this also doesn't comply with the guidelines)

-- a member of the checkra1n team

eklavya 1325 days ago [-]
You may be correct, but do you really have to be so attacking?
btian 1325 days ago [-]
Can you provide a link to the "public knowledge" for those who don't know?
mxcrossb 1326 days ago [-]
It seems to me that if Apple felt that Nvidia would limit them, they could have outbid them for ARM! So I think you are correct.
dannyw 1326 days ago [-]
Apple would not get antitrust approval (iPhone maker controls all android chips????). So that’s why.
macintux 1326 days ago [-]
Were it a serious enough threat to their ARM license, they'd find a way to buy ARM and keep it independent.
kabacha 1326 days ago [-]
Exactly, Apple is already straddling the line (and imho way past it) on anti-comp laws.
kzrdude 1326 days ago [-]
I'm not defending Apple, just thinking that can't we say this for many of the biggest tech firms? They are way past the line on anticompetitive business.
wongarsu 1325 days ago [-]
Yes, Apple is not alone in this. Google is another example, and they are very aware of this and are acting very carefully
jonhohle 1326 days ago [-]
What is their monopoly or which of their competitors are they colluding with?
tick_tock_tick 1326 days ago [-]
Neither of those are required to violate anti-trust laws.
millstone 1326 days ago [-]
I think Apple is not committed to ARM at all. Bitcode, Rosetta 2, "Apple Silicon" - it all suggests they want to keep ISA flexibility.
headmelted 1326 days ago [-]
Exactly. Apple’s strategy here is very clear:

Offer customers iOS apps and games on the next MacBook as a straight swap for Boot Camp and Parallels. Once they’ve moved everyone over to their own chips and brought back Rosetta and U/Bs they’re essentially free to replace whatever they like at the architecture level.

In their reveal I noticed that they only mentioned ARM binaries running in virtual environments. It makes sense if you don’t want to commit to supporting GNU tools natively on your devices (as it would mean sticking with an established ISA)

saagarjha 1325 days ago [-]
I would be quite surprised if LLVM ever lost the ability to compile C for the platform.
headmelted 1325 days ago [-]
Exactly my point though.

Apple is large enough that if they want to break from ARM in the future they can do so by forking an LLVM backend. That's not a large job if it's a very small change, but once it's been done once they have plenty of resources to provide ongoing support (like they do for webkit).

The dividends of doing so are potentially massive. Given that they've been able to make really large gains with the A series chips to date on mobile (not least because they've been able to offload tasks to dedicated co-processors that general-purpose ARM cores don't ship with), the rewards for having chips that are a generation ahead of other like-for-like computers will outweigh the cost of maintaining the software.

emn13 1326 days ago [-]
Wow, but that cost - it's not a small thing to transition ISA, and don't forget that this transition is one of the simpler ones (more registers, fairly few devices). The risks of transitioning everything away from arm would be much greater.

I guess they have some ISA flexibility (which is remarkable). But not much; each transition was still a very special set of circumstances and a huge hurdle, I'm sure.

rocqua 1326 days ago [-]
At the low-level driver interface, transitioning ISA is a big deal. But I would guess that, at higher levels, most of the work is just changing the target of your compiler?

As in, most of the work occurs in the low-level parts of the Operating system. After that the OS should abstract the differences away from User-space.

emn13 1325 days ago [-]
No way; not at all.

First of all: there's lots of software that's not the OS. The OS is the easy bit; everything else is a grindy, grindy horror story. A lot of that code will be third-party. And if you think, "hey, we'll just recompile!", and you can actually get them to - well, good luck, but performance will be abysmal in many cases. Lots and lots of libraries have hand-tuned code for specific architectures. Anything with vectorization - despite compilers being much better than they used to be - may see huge slowdowns without hand tuning. That's not just speculation; you can look at software that gets the vectorization treatment, or that was ported to ARM from x86 poorly - performance falls off a cliff.

Then there are the JITs and interpreters, of which there are quite a few, and they're often hyper-tuned to the ISAs they run on. Also, they can't afford to run something like LLVM on every bit of output; that's way too slow. So even non-vectorized code suffers (you can look at some of the .NET Core ARM developments to get a feel for this, but the same goes for JS/Java etc). Web browsers are hyper-tuned. Regex engines, packet filters, etc, etc, etc.

Not to mention: just getting a compiler like LLVM to support a new ISA as well as it supports x86 or ARM isn't a small feat.

Finally: at least at this point, until our AI overlords render it redundant, all this work takes expertise, but that expertise takes training, which isn't easy on an ISA without hardware. That's why Apple's current transition is so easy: they already have the hardware, and the trained experts, some with over a decade of experience on that ISA! But if they really want to go their own route... well, that's tricky, because what are all those engineers going to play around on to learn how it works, what's fast, and what's bad?

All in all, it's no coincidence transitions like this take a long time, and that's for simple (aka well-prepared) transitions like the one Apple's doing now. Saying they have ISA "flexibility", as if ISAs were somehow interchangeable, completely misses how tricky those details are and how much they determine whether such a transition is achievable. Apple doesn't have general ISA flexibility; it has a costly route from specifically x86 to specifically ARM, and nothing else.

simias 1325 days ago [-]
Extremely aggressive optimizations are really special though, and they tend to require rewrites when new CPU extensions release (and workarounds to work on older hardware). If you rely on super low level ultra-aggressive micro optimizations your code is going to have a relatively short shelf life, different ISA or not.

The vast majority of the code written for any given computer or smartphone doesn't have this level of sophistication and optimization though. I'd wager that for most code just changing the build target will indeed mostly just work.

It won't be painless but modern code tends to be so high level and abstracted (especially on smartphones) than the underlying ISA matters a lot less than in the past.

emn13 1323 days ago [-]
That's not my experience. Plain old SIMD code ages fairly well - but if you port to a new arch, you will need to look at it again.

This isn't a question of precise instruction timing, it's a question of compilers being pretty bad at leveraging SIMD in general, even in 2020. Also, while I'm sure lots of projects have hand-tuned assembly, even higher level stuff like intrinsics help a lot, and need manual porting.

spronkey 1324 days ago [-]
Absolutely this - even some aggressively optimised stuff like game engines don't have insurmountable amounts of micro optimisations in them.

Also - why would Apple massively up and change uArchs? Even if they did decide to turn Apple Silicon into not-ARM, I'd wager it would look a lot more like ARM than for example, x86 does.

pas 1326 days ago [-]
Do it more and more and they'll have the tools to efficiently manage them.

Also likely the small tweaks they will want from time to time should be "easy" to follow internally, if you can orchestrate everything from top to bottom and back.

scarface74 1326 days ago [-]
I doubt Apple is dumb enough not to have basically a perpetual license for ARM.
soapdog 1326 days ago [-]
ARM was launched as a joint venture between Acorn, Apple, and VLSI. I believe that since day 0 Apple had perpetual access to the IP.
ksec 1325 days ago [-]
They sold all of their ARM shares in the mid 90s to keep themselves from going bankrupt.

Not to mention that starting a JV has nothing to do with perpetual IP access. You still have to pay for it.

selectodude 1325 days ago [-]
They can certainly have a contract with Arm that allows them to renew their architecture license in perpetuity, one that Nvidia won't be able to void.

I obviously don't know that for sure, but the idea that Apple would stake their future on something for which they don't have an ironclad legal position seems unlikely.

systemvoltage 1326 days ago [-]
I would also agree. The thing is, businesses break up and come together all the time. If it makes sense and both parties can agree despite past disagreements and lawsuits, they will partner.

The fact that Apple and Nvidia have a bad relationship at the moment regarding GPUs is probably orthogonal to what they'll do with this new branch of Nvidia, that is, ARM.

scarface74 1326 days ago [-]
What need does Apple have with ARMs R&D going forward? They have their own chip designers, build tools, etc.?

True about frenemies the entire time that Apple was suing Samsung, it was using Samsung to manufacture many of its components.

pas 1326 days ago [-]
But if your chip heavily builds on arm's IP you need a license for that at least as long as you can't replace the infringing parts of the design. Which sounds very much impossible if you also want to progress on other aspects of having the best chips.
spacedcowboy 1325 days ago [-]
Apple uses the ARM ISA. It doesn't use ARM IP - as in, the design of the chip. Apple designed their own damn chip!

Since they're not branding it as ARM in any way, shape, or form, and they have a perpetual architectural license to the ISA, I suspect they could do pretty much what they please - as long as they don't call it ARM. Which they don't.

pas 1325 days ago [-]
During the design were they careful not to create any derivative works of arm IP and/or not to infringe on any of arm's patents?
spacedcowboy 1324 days ago [-]
Apple co-founded the company. I imagine that they have a right to that IP.

Even if not, then yes, I would imagine that a behemoth of IP design like Apple would probably have considered IP rights during that design...

floatboth 1324 days ago [-]
They definitely call it ARM in developer tools and technical documents.
scarface74 1323 days ago [-]
I doubt that will be true going forward. They first used the “Apple Silicon” branding for the Mac and yesterday they used the branding for the Watch and iPad (?)
delfinom 1324 days ago [-]
>Just because Apple and nVidia has bad relationship at the moment regarding their GPUs is probably orthogonal to what they'll do with this new branch of nVidia, that is ARM.

Yea, they got fucked by Nvidia business practices multiple times. There's a saying about shame on you, shame on me, etc. Unless the entire Nvidia business unit also gets replaced in the same transaction, it doesn't matter how much of a faux separation of concerns they want to market.

kevin_b_er 1325 days ago [-]
https://www.electronicsweekly.com/news/business/finance/arm-...

Broadcom has an architectural license. They do also license core designs.

Really if nVidia locks up the lower end cores, then a lot of stuff breaks. Billions of super tiny ARM cores are everywhere. ARM has few competitors in the instruction set space for low end, low power, low cost cores. AVR, PIC, and MIPS are what come to mind. And AVR/PIC are owned by Microchip corporation.

These ARM chip unit licenses are dirt cheap, there's hundreds of small manufacturers, and their chips go in everything, and in unexpected places. And these aren't just little microprocessors anymore. They're even in SoCs as little coprocessors that manage separate hardware components in realtime.

The amount of penetration ARM has in hidden places cannot be overstated. And there isn't a quick replacement for them - not one freely licensed to any manufacturer.

himinlomax 1326 days ago [-]
> nVidia could cut them off from any new designs

Why would they do that anyway? The downsides are obvious (immediate loss of revenue), the risks are huge (antitrust litigation, a big boost to RISC-V or even MIPS), and the possible benefits are nebulous.

Those who are most obviously at risk are designers of mobile GPUs (Broadcom, PowerVR ...).

ATsch 1325 days ago [-]
If they do it that directly, sure. But on a large enough (time)scale, incentives are the only thing that matters. And they'll certainly think hard about putting their fancy new research that would help a competitor into their openly licensed chips from now on.
mbajkowski 1325 days ago [-]
Curious as I don't know the terms of a perpetual architectural ARM license. But, is it valid only for a specific architecture, say v8 or v9, or is it valid for all future architectures as well? Or is it one of those things, where it depends per licensee and how they negotiated?
hastradamus 1325 days ago [-]
Nvidia mend. Lol
ChuckMcM 1326 days ago [-]
And there you have it. Perhaps the greatest thing to happen to RISC-V since the invention of the FPGA :-).

I never liked Softbank owning it, but hey someone has to.

Regarding the federal investment in FOSS thread that was here perhaps CPU architecture would be a good candidate.

dragontamer 1326 days ago [-]
RISC-V still seems too ad-hoc to me, and really new. Hard to say where it'd go for now.

I know momentum is currently towards ARM over POWER, but... OpenPOWER is certainly a thing, and has IBM / Red Hat support. IBM may be expensive, but they already were proven "fair partners" in the OpenPOWER initiative and largely supportive of OSS / Free Software.

ChuckMcM 1326 days ago [-]
I would love OpenPOWER to succeed. I just don't see the 48 pin QFP version that costs < $1 and powers billions of gizmos. For me the ARM ecosystem's biggest win is that it scales from really small (M0/M0+) to really usefully big (A78) and has many points between those two architectures.

I don't see OpenPOWER going there, but I can easily see RISC-V going there. So, for the moment, that is the horse I'm betting on.

dragontamer 1326 days ago [-]
Not quite 48-pin QFP chips, but 257-pin embedded is still smaller than a Raspberry Pi. (Just searched for NXP's newest Power chip, and it's an S32R274: 2MB, 257-pin BGA. Definitely "embedded" size, but not as small as a Cortex-M0.)

To be honest, I don't think that NVidia/ARM will screw over their Cortex-M0 or Cortex-M0+ customers. I'm more worried about the higher end, whether or not NVidia will "play nice" with its bigger rivals (Apple, Intel, AMD) in the datacenter.

exikyut 1326 days ago [-]
The FS32R274VCK2VMM appears to be the cheapest in this series; Digi-Key have it for $30, NXP has it for "$13 @ 10K". This is for a 200MHz part.

https://www.nxp.com/part/FS32R274VCK2VMM

https://www.digikey.com/product-detail/en/nxp-usa-inc/FS32R2...

The two related devkits list for $529 and $4,123: https://www.digikey.com/products/en/development-boards-kits-...

--

Those processors make quite a few references to an "e200", which I think is the CPU architecture. I discovered that Digi-Key lists quite a few variants of this under Core Processor; and checking the datasheets of some random results suggests that they are indeed Power architecture parts.

https://www.digikey.com/products/en/integrated-circuits-ics/...

The cheapest option appears to be the $2.67@1000, up-to-48MHz SPC560D40L1B3E0X with 256KB ECC RAM.

Selecting everything >100MHz finds the $7.10@1000 SPC560D40L1B3E0X, an up-to-120MHz part that adds 1MB flash (128KB ECC RAM).

Restricting to >=200MHz finds the $13.32@500 SPC5742PK1AMLQ9R, which has dual cores at 200MHz, 384KB ECC RAM and 2.5MB flash, and notes core lock-step.

--

After discovering the purpose of the "view prices at" field, the landscape changes somewhat.

https://www.digikey.com/products/en/integrated-circuits-ics/...

The SPC574S64E3CEFAR (https://www.st.com/resource/en/datasheet/spc574s64e3.pdf) is 140MHz, has 1.5MB code + 64KB data flash and 96KB+32KB data RAM, and is available for $14.61 per 1ea.

The SPC5744PFK1AMLQ9 (https://www.nxp.com/docs/en/data-sheet/MPC5744P.pdf) is $20.55@1, 200MHz, 2.5MB ECC flash, 384KB ECC RAM, and has two cores that support lockstep.

The MPC5125YVN400 (https://www.nxp.com/docs/en/product-brief/MPC5125PB.pdf) is $29.72@1, 400MHz, supports DDR2@200MHz (only has 32KB onboard (S)RAM), and supports external flash. (I wonder if you could boot Linux on this thing?)

rvense 1325 days ago [-]
These are all basically ten-year-old parts, aren't they?
ChuckMcM 1325 days ago [-]
Yes but hey, the core ARM ISA is like 40 years old. The key is that they are in fact "low cost SoCs" which is not something I knew existed :-).

It's really too bad the dev boards are so expensive, but I get that you need a lot of layers to route that sort of BGA.

rvense 1324 days ago [-]
Sure, the ARM ISA is old, but a few things have happened in microarchitecture since then. I wouldn't be rushing to use a 10-year-old ARM over a newer one. The Cortex cores are pretty great compared to ARM9 or whatever.
dragontamer 1324 days ago [-]
The ARM Cortex-M3 was released in 2006 and is still a popular microcontroller core. Microcontrollers have a multi-decade long lifespan. (I'm still seeing new 8051-based designs...)

There are still new chips using the Cortex-M3 today. Microcontroller devs do NOT want to be changing their code that often.

New chips move the core to lower-and-cheaper process nodes (and lower the power consumption), while otherwise retaining the same overall specifications and compatibility.

dragontamer 1325 days ago [-]
They're all low end embedded parts with highly integrated peripherals. Basically: a microcontroller.

No different than say, Cortex-M0 or M0+ in many regards (although ARM scales down to lower spec'd pieces).

na85 1326 days ago [-]
I miss DIP chips that would fit on breadboards. I don't have steady enough hands to solder QFP onto a PCB, and I'm too cheap to buy an oven :(
Ecco 1326 days ago [-]
You’re supposed to drag-solder those. Look it up on YouTube, it’s super easy. The hardest part is positioning the chip, but it’s actually easier than with an oven, because you can rework it if you only solder one or two pins :)
IshKebab 1326 days ago [-]
There's an even easier way than drag soldering - just solder it normally, without worrying about bridges. You can put tons of solder on it.

Then use some desoldering braid to soak up the excess solder. It will remove all the bridges and leave perfect joints.

na85 1326 days ago [-]
Wow, just looked up a video and some guy did an 0.5mm pitch chip pretty darn quickly. Thank you!
Ecco 1326 days ago [-]
You’re welcome! Also, flux. Lots of it. Buy a good one, and use tons of it. Then clean the hell out of your PCB!
ChuckMcM 1326 days ago [-]
https://www.digikey.com/catalog/en/partgroup/smt-breakout-pc... can help. I have hand soldered STM32s to this adapter (use flux, a small tip)
foldr 1326 days ago [-]
A cheap toaster oven or hot air tool works fine for these. Or, as others have said, a regular soldering iron with lots of flux.
awill 1326 days ago [-]
the war is over. Arm has won. That dominance will take a long time to fade. AWS and Apple's future is Arm.
darksaints 1326 days ago [-]
OpenPOWER is pretty awesome but would be nowhere near as awesome as an OpenItanium. IMHO, Itanium was always mismarketed and misoptimized. It made a pretty good server processor, but not so good that enterprises were willing to migrate 40 year old software to run on it.

In mobile form, it would have made a large leap in both performance and battery life. And it would have been a fairly easy market to break into: the average life of a mobile device is a few years, not a few decades. Recompilation and redistribution of software is the status quo.

anarazel 1326 days ago [-]
IMO VLIW is an absurdly bad choice for a general purpose processor. It requires baking in a huge amount of low level micro-architectural details into the compiler / generated code. Which obviously leads to problems with choosing what hardware generation to optimize for / not being able to generate good code for future architectures.

And the compiler doesn't even come close to having as much information as the CPU has. Which basically means that most of the VLIW stuff just ends up needing to be broken up inside the CPU for good performance.

dragontamer 1326 days ago [-]
VLIW was the best implementation (20 years ago) of instruction level parallelism.

But what have we learned in these past 20 years?

* Computers will continue to become more parallel -- AMD Zen 2 has 10 execution pipelines, supporting 4-way decode and 6-uop-per-clock dispatch per core, with somewhere close to 200 registers for renaming / reordering instructions. Future processors will be bigger and more parallel; Ice Lake is rumored to have over 300 renaming registers.

* We need assembly code that scales to all different processors of different sizes. Traditional assembly code is surprisingly good (!!!) at scaling, thanks to "dependency cutting" with instructions like "xor eax, eax" (see the sketch after this list).

* Compilers can understand dependency chains, "cut them up" and allow code to scale. The same code optimized for Intel Sandy Bridge (2011-era chips) will continue to be well-optimized for Intel Icelake (2021 era) ten years later, thanks to these dependency-cutting compilers.
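
A minimal sketch of what that "dependency cutting" buys you in practice (purely illustrative code, not from any real project; function names are made up):

    #include <cstddef>

    // One long dependency chain: every add must wait for the previous one,
    // so an out-of-order core cannot overlap the adds no matter how wide it is.
    double sum_serial(const double* a, std::size_t n) {
        double s = 0.0;
        for (std::size_t i = 0; i < n; ++i)
            s += a[i];                  // each s depends on the previous s
        return s;
    }

    // Two independent chains: the adds into s0 and s1 don't depend on each
    // other, so a 2011-era core and a 2021-era core can both overlap them --
    // the wider machine simply finds more of this latent ILP in the same code.
    // (Compilers do the analogous thing at the register level, e.g. re-zeroing
    // a register with "xor eax, eax", which the CPU recognizes as a fresh value
    // with no dependency on the register's old contents.)
    double sum_split(const double* a, std::size_t n) {
        double s0 = 0.0, s1 = 0.0;
        std::size_t i = 0;
        for (; i + 1 < n; i += 2) {
            s0 += a[i];
            s1 += a[i + 1];
        }
        if (i < n) s0 += a[i];
        return s0 + s1;
    }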

I think a future VLIW chip can be made that takes advantage of these facts. But it wouldn't look like Itanium.

----------

EDIT: I feel like "xor eax, eax" and other such instructions for "dependency cutting" are wasting bits. There might be a better way for encoding the dependency graph rather than entire instructions.

Itanium's VLIW "packages" are too static.

I've discussed NVidia's Volta elsewhere, which has 6-bit dependency bitmasks on every instruction. That's the kind of "dependency graph" information that a compiler can provide very easily, and probably save a ton on power / decoding.

jabl 1326 days ago [-]
I agree there is merit in the idea of encoding instruction dependencies in the ISA. There have been a number of research projects in this area, e.g. wavescalar, EDGE/TRIPS, etc.

It's not only about reducing the need for figuring out dependencies at runtime, but you could also partly reduce the need for the (power hungry and hard to scale!) register file to communicate between instructions.

floatboth 1324 days ago [-]
Main lesson: we failed to make all the software JIT-compiled or AOT-recompiled-on-boot or something, which would allow retargeting the optimizations for each new generation of a VLIW CPU. Barely anyone even tried. Well, I guess in the early 2000s there was this vision that everything would be Java, which is JITed, but lol
dragontamer 1324 days ago [-]
Your point seems invalid, in the face of a large chunk of HPC (neural nets, matrix multiplication, etc. etc.) getting rewritten to support CUDA, which didn't even exist back when Itanium was announced.

VLIW is a compromise product: it's more parallel than a traditional CPU, but less parallel than SIMD/GPUs.

And modern CPUs have incredibly powerful SIMD engines: AVX2 and AVX512 are extremely fast and parallel. There are compilers that auto-vectorize code, as well as dedicated languages (such as ispc) which target SIMD.

Encoders, decoders, raytracers, and more have been rewritten for Intel AVX2 SIMD instructions, and then re-rewritten for GPUs. The will to find faster execution has always existed, but unfortunately, Itanium failed to perform as well as its competition.

floatboth 1324 days ago [-]
I'm not talking about rewrites and GPUs. I'm saying we do not have dynamic recompilation of everything. As in: if we had ALL binaries that run on the machine (starting with the kernel) stored in some portable representation like wasm (or not-fully-portable-but-still-reoptimizable like LLVM bitcode) and recompiled with optimization for the current exact processor at startup. Only that would solve the "new generation of VLIW CPU needs very different compiler optimizations to perform, oops, all your binaries are for the first generation and they are slow now" problem.

GPUs do work like this – shaders recompiled all the time – so VLIW was used in GPUs (e.g. TeraScale). But on CPUs we have a world of optimized, "done" binaries.

ATsch 1325 days ago [-]
All of this hackery with hundreds of registers just to keep making a massively parallel computer look like an 80s processor is what something like Itanium would have prevented. Modern processors ended up becoming basically VLIW anyway; Itanium just refused to lie to you.
dragontamer 1325 days ago [-]
When standard machine code is written in a "dependency cutting" way, it scales to many different reorder-buffer sizes. A system from 10+ years ago with only a 100-entry reorder buffer will execute the code with maximum parallelism... while a system today with a 200- to 300-entry reorder buffer will execute the SAME code, also with maximum parallelism (and reach a higher instructions-per-clock rate).

That's why today's CPUs can have 4-way decoders and 6-way dispatch (AMD Zen and Skylake): they can "pick up more latent parallelism" that the compilers gave them many years ago.

"Classic" VLIW limits your potential parallelism to the ~3-wide bundles (in Itanium's case). Whoever makes the "next" VLIW CPU should allow a similar scaling over the years.

-----------

It was accidental: I doubt that anyone actually planned the x86 instruction set to be so effectively instruction-level parallel. It's something that was discovered over the years, and proven to be effective.

Yes: somehow more parallel than the explicitly parallel VLIW architecture. It's a bit of a hack, but if it works, why change things?

anarazel 1326 days ago [-]
I don't understand how an increase in CPU-internal parallelism (including the implied variability) and the supposed benefits of VLIW go together?
dragontamer 1326 days ago [-]
I'm talking about a mythical / mystical VLIW architecture. Obviously, older VLIW designs have failed in this regards... but I don't necessarily see "future" VLIW processors making the same mistake.

Perhaps from your perspective, a VLIW architecture that fixes these problems wouldn't necessarily be VLIW anymore. Which... could be true.

moonchild 1326 days ago [-]
Have you seen the mill cpu?
javajosh 1326 days ago [-]
Has anyone?
dralley 1325 days ago [-]
At the rate they're going, all the patents they've been filing will be expired by the time they get a chip out the door.
KMag 1326 days ago [-]
> And the compiler doesn't even come close to having as much information as the CPU has.

Unless your CPU has a means for profiling where your pipeline stalls are coming from, combined with dynamic recompilation/reoptimization similar to IBM's project DAISY or HP's Dynamo.

It's not going to do as well as out-of-order CPUs that make instruction re-optimization decisions for every instruction, but I wouldn't rule out software-controlled dynamic re-optimization getting most of the performance benefits of out-of-order execution with a much smaller power budget, due to not re-doing those optimization calculations for every instruction. There are reasons most low-power implementations are in-order chips.

csharptwdec19 1325 days ago [-]
I feel like what you describe is possible. When I think of what Transmeta was able to accomplish in the early 2000s just with CMS, certainly so.
darksaints 1326 days ago [-]
Traditional compiler techniques may have struggled with maintaining code for different architectures, but a lot has changed in the last 15 years. The rise of widely used IR languages has led to compilers that support dozens of architectures and hundreds of instruction sets. And they are getting better all the time.

The compiler has nearly all of the information that the CPU has, and it has orders of magnitude more. At best, your CPU can think a couple dozen cycles ahead of what it is currently executing. The compiler can see the whole program, can analyze it using dozens of methodologies and models, and can optimize accordingly. Something like Link Time Optimization can be done trivially with a compiler, but it would take an army of engineers decades of work to be able to implement in hardware.
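To make the LTO point concrete, here is a minimal sketch (file names and build lines are illustrative; the flags shown are the usual GCC/Clang ones):

    // util.cpp -- a separate translation unit
    int add_one(int x) { return x + 1; }

    // main.cpp -- only a declaration is visible here, so without whole-program
    // knowledge the compiler has to emit a real call.
    int add_one(int x);
    int main() { return add_one(41); }

    // With link-time optimization the linker sees both function bodies and can
    // inline across the file boundary, folding main() down to "return 42":
    //
    //   g++ -O2 -flto -c util.cpp
    //   g++ -O2 -flto -c main.cpp
    //   g++ -O2 -flto util.o main.o -o demo
    //
    // The CPU, by contrast, only ever sees the call instruction that was emitted.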

dragontamer 1326 days ago [-]
> At best, your CPU can think a couple dozen cycles ahead of what it is currently executing.

The 200-sized reorder buffer says otherwise.

Loads/stores can be reordered across 200+ concurrent operations on modern Intel Skylake (2015 through 2020) CPUs. And it's about to get a bump to 300+ entry reorder buffers in Ice Lake.

Modern CPUs are designed to "think ahead" almost the entirety of DDR4 RAM Latency, allowing reordering of instructions to keep the CPU pipes as full as possible (at least, if the underlying assembly code has enough ILP to fill the pipelines while waiting for RAM).

> Something like Link Time Optimization can be done trivially with a compiler, but it would take an army of engineers decades of work to be able to implement in hardware.

You might be surprised at what the modern Branch predictor is doing.

If your "call rax" indirect call constantly calls the same location, the branch predictor will remember that location these days.

KMag 1326 days ago [-]
With proper profiling (say, reservoir sampling of instructions causing pipeline stalls), and dynamic recompilation/reoptimization like IBM's project DAISY / HP's Dynamo, you may get performance near a modern out-of-order desktop processor at the power budget of a modern in-order low-power chip.

You get instructions scheduled based on actual dynamically measured usage patterns, but you don't pay for dedicated circuits to do it, and you don't re-do those calculations in hardware for every single instruction executed.

It's not a guaranteed win, but I think it's worth exploring.

dragontamer 1326 days ago [-]
But once you do that, then you hardware-optimize the interpreter, and then it's no longer called a "dynamic recompiler", but instead a "frontend to the microcode". :-)
KMag 1326 days ago [-]
No doubt there is still room for a power-hungry out-of-order speed demon of an implementation, but you need to leave the door open for something with approximately the TDP of a very-low-power in-order-processor with performance closer to an out-of-order machine.
branko_d 1326 days ago [-]
Neo: What are you trying to tell me? That I can dodge "call rax"?

Morpheus: No, Neo. I'm trying to tell you that when you're ready, you won't need "call rax".

---

The compiler has access to optimizations at a higher level of abstraction than what the CPU can do. For example, the compiler can eliminate the call completely (i.e. inline the function), or convert a dynamic dispatch into a static one (if it can prove that an object will always have a specific type at the call site), or decide where to favor small code over fast code (via profile-guided optimization), or even switch from non-optimized code (but with short start-up time) to optimized code mid-execution (tiered compilation in JITs), or move computation outside loops (if it can prove that the result is the same in all iterations), and many other things...
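To make the inlining/devirtualization point concrete, a small hedged example (the class and function names are invented for illustration):

    #include <cstdio>

    struct Shape {
        virtual ~Shape() = default;
        virtual int sides() const = 0;
    };

    struct Triangle final : Shape {
        int sides() const override { return 3; }
    };

    int count_sides(const Shape& s) {
        return s.sides();   // a dynamic dispatch, in general
    }

    int main() {
        Triangle t;
        // Because the compiler can prove the object is exactly a Triangle
        // (local object, 'final' class), it can devirtualize the call, inline
        // sides(), and fold this down to printing a constant. A branch
        // predictor can only guess the call target; the compiler can remove
        // the call entirely.
        std::printf("%d\n", count_sides(t));
        return 0;
    }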

saagarjha 1325 days ago [-]
There is no way a compiler can do anything for an indirect call that goes one way for a while and the other way afterwards. A branch predictor can get both with accuracy that is, if not 100%, about as close to it as you can possibly get.
branko_d 1325 days ago [-]
Sure.

My point was simply that the compiler may be in a position to disprove the assumption that the call is in fact dynamic (it may actually be static) or that it has to be a call in the first place (and inline the function instead).

I'm certainly not arguing against branch predictors.

formerly_proven 1325 days ago [-]
> The compiler has nearly all of the information that the CPU has, and it has orders of magnitude more.

The CPU has something the compiler can never have.

Runtime information.

That's why VLIW works great for DSPs, which are 99.9% fixed access patterns, while being bad for general-purpose code.

drivebycomment 1326 days ago [-]
Itanium deserved its fiery death, and a resurrection doesn't make any sense whatsoever. It's a dead-end architecture, and humanity gained (by freeing up valuable engineering power for other, more useful endeavors) when it died.
darksaints 1326 days ago [-]
Itanium was an excellent idea that needed investment in compilers. Nobody wanted to make that investment because speculative execution got them 80% of the way there without the investment in compilers. But as it turns out, speculative execution was a phenomenally bad idea, and patching its security vulnerabilities has set back processor performance to the point where VLIW seems like a good idea again. We should have made those compiler improvements decades ago.
dragontamer 1326 days ago [-]
NVidia Volta: https://arxiv.org/pdf/1804.06826.pdf

Each machine instruction on NVidia Volta has the following information:

* Reuse Flags

* Wait Barrier Mask

* Read/Write barrier index (6-bit bitmask)

* Read Dependency barriers

* Stall Cycles (4-bit)

* Yield Flag (1-bit software hint: NVidia CU will select new warp, load-balancing the SMT resources of the compute unit)

Itanium's idea of VLIW was commingled with other ideas; in particular, the idea of a compiler static-scheduler to minimize hardware work at runtime.

To my eyes: the benefits of Itanium are implemented in NVidia's GPUs. The compiler support for NVidia's scheduling flags has been built and proven effective.

Itanium itself: the crazy "bundling" of instructions and such, seems too complex. The explicit bitmasks / barriers of NVidia Volta seems more straightforward and clear in describing the dependency graph of code (and therefore: the potential parallelism).

----------

Clearly, static-compilers marking what is, and what isn't, parallelizable, is useful. NVidia Volta+ architectures have proven this. Furthermore, compilers that can emit such information already exist. I do await the day when other architectures wake up to this fact.

StillBored 1326 days ago [-]
GPUs aren't general-purpose compute. EPIC did fairly well with HPC-style applications as well; it was everything else that was problematic. So yes, there are a fair number of workload and microarchitecture decision similarities. But right now, those workloads tend to be better handled with a GPU-style offload engine (or, as the industry appears to be slowly moving toward, a lot of fat vector units attached to a normal core).
dragontamer 1326 days ago [-]
I'm not talking about the SIMD portion of Volta.

I'm talking about Volta's ability to detect dependencies. Which is null: the core itself probably can't detect dependencies at all. It's entirely left up to the compiler (or at least... that seems to be the case).

AMD's GCN and RDNA architecture is still scanning for read/write hazards like any ol' pipelined architecture you learned in college. The NVidia Volta thing is new, and probably should be studied from a architectural point of view.

Yeah, it's a GPU feature on NVidia Volta. But it's pretty obvious to me that this explicit dependency-barrier thing could be part of a future ISA, even one for traditional CPUs.

rrss 1325 days ago [-]
FWIW, this article suggests the static software scheduling you are describing was introduced in Kepler, so it's probably at least not entirely new in Volta:

https://www.anandtech.com/show/5699/nvidia-geforce-gtx-680-r...

> NVIDIA has replaced Fermi’s complex scheduler with a far simpler scheduler that still uses scoreboarding and other methods for inter-warp scheduling, but moves the scheduling of instructions in a warp into NVIDIA’s compiler. In essence it’s a return to static scheduling

and I think this is describing more or less the same thing in Maxwell: https://github.com/NervanaSystems/maxas/wiki/Control-Codes

dragontamer 1325 days ago [-]
I appreciate the info. Apparently NVidia has been doing this for more years than I expected.
StillBored 1326 days ago [-]
I think you're conflating OoO and speculative execution. It was OoO which the Itanium architects (apparently) didn't think would work as well as it did. OoO, and being able to build wide superscalar machines which could dynamically determine instruction dependency chains, is what killed EPIC.

Speculative execution is something you would want to do with the Itanium as well, otherwise the machine is going to be stalling all the time waiting for branches/etc. Similarly, later Itaniums went OoO (dynamically scheduled) because, it turns out, the compiler can't know runtime state.

https://www.realworldtech.com/poulson/

Also while googling for that, ran across this:

https://news.ycombinator.com/item?id=21410976

PS: speculative execution is here to stay. It might be wrapped in more security domains, and/or it's going to just be one more nail in the business model of selling shared compute (something that was questionable from the beginning).

xgk 1326 days ago [-]

    questionable from the beginning
Agreed. If you look at what makes up the majority of compute loads (e.g. Instagram, Snap, Netflix, HPC), then that's (a) not particularly security critical, and (b) so big that the vendors can split their workload into security critical / not security critical, and rent fast machines for the former and secure machines for the latter.

I wonder which cloud provider is the first to offer this in a coherent way.

Quequau 1326 days ago [-]
I dimly recall reading an interview with one of Intel's Sr. Managers on the Itanium project where he explained his thoughts on why Itanium failed.

His explanation centred on the fact that Intel decided early on that Itanium would only ever be an ultra-high-end niche product, and only built devices that Intel could demand very high prices for. This in turn meant that almost no one outside of the few companies supporting Itanium development (and certainly not most of the people working on other compilers and similar developer tools at the time) had any interest in working on Itanium, because they simply could not justify the expense of obtaining the hardware.

So all the organic open source activity that goes on for all the other platforms, which are easily obtainable by pedestrian users, simply did not happen for Itanium. Intel did not plan on that up front (though in hindsight it seems obvious), and by the time it was widely recognised within the management team, no one was willing to devote the scale of resources required for serious development of developer tools on a floundering project.

jabl 1326 days ago [-]
> Itanium was an excellent idea that needed investment in compilers.

ISTR that Intel & HP spent well over a $billion on VLIW compiler R&D, with crickets to show for it all.

How much are you suggesting should be spent this time for a markedly different result?

drivebycomment 1325 days ago [-]
By the late 2000s, instruction scheduling research was largely considered done and dusted, with papers like:

https://dl.acm.org/doi/book/10.5555/923366 https://dl.acm.org/doi/10.1145/349299.349318

and many, many others (it produced so many PhDs in the 90s). And, needless to say, HP and Intel hired many excellent researchers during the heyday of Itanium. So I don't know on what basis you think there wasn't enough investment, and I have no choice but to assume you're ignorant of the actual history here, both in academia and in industry.

It turns out instruction scheduling cannot overcome the challenge of variable memory and cache latency, and branch prediction, because all of those are dynamic and unpredictable for "integer" applications (i.e. the bulk of the code running on the CPUs of your laptop and cell phone). And predication, which was one of the "solutions" to overcome branch misprediction penalties, turns out to be not very efficient, and is limited in its application.

For integer applications, it turns out instruction-level parallelism isn't really the issue. It's about how to generate and maintain as many outstanding cache misses as possible at a time. VLIW turns out to be insufficient and inefficient for that. The minor attempts at addressing that through prefetches and more elaborate markings around loads/stores all failed to give good results.
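As a rough sketch of what "many outstanding cache misses" means in code (a textbook-style illustration, not something from the comment; names are made up):

    #include <cstddef>

    struct Node { long value; Node* next; };

    // One pointer chase: each load depends on the previous one, so at most one
    // cache miss is outstanding at a time and the core mostly waits on DRAM.
    long sum_one_list(const Node* p) {
        long s = 0;
        for (; p != nullptr; p = p->next) s += p->value;
        return s;
    }

    // Two independent chases: the loads from the two lists don't depend on each
    // other, so an out-of-order core can keep two misses in flight and spend
    // far less time stalled. A static (VLIW-style) schedule can't conjure this
    // overlap when latencies are unpredictable; the code has to expose it.
    long sum_two_lists(const Node* a, const Node* b) {
        long s = 0;
        while (a != nullptr && b != nullptr) {
            s += a->value + b->value;
            a = a->next;
            b = b->next;
        }
        for (; a != nullptr; a = a->next) s += a->value;
        for (; b != nullptr; b = b->next) s += b->value;
        return s;
    }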

For HPC-type workloads, it turns out data parallelism and thread-level parallelism are a much more efficient way to improve performance, and they also make ILP on a single instruction stream play only a very minor role - GPUs and ML accelerators demonstrate this very clearly.

As for security and speculative execution: speculative execution is not going anywhere. Naturally, there is a lot of research around this, like:

https://ieeexplore.ieee.org/abstract/document/9138997 https://dl.acm.org/doi/abs/10.1145/3352460.3358306

and while it will take a while before real pipelines implement ideas like the above (so we may continue to see smaller and smaller vulnerabilities as the industry collectively plays this whack-a-mole game), I don't see a world where top-of-the-line general-purpose microprocessors give up on speculative execution, as the performance gain is simply too big.

I have yet to meet any academics, industry processor architects, or compiler engineers who think VLIW / Itanium is the way to move forward.

This is not to say that putting as much work as possible into the compiler is a bad idea, as nVidia has demonstrated. But what they are doing is not VLIW.

xigency 1325 days ago [-]
> I never liked Softbank owning it, but hey someone has to.

I understand what you're saying and this seems to be the prevailing pattern but I really don't understand it. ARM could easily be a standalone company. For some reason, mergers are in.

ChuckMcM 1325 days ago [-]
I would like to understand what you know about ARM that its board of directors didn't (doesn't) know. In my experience companies merge when they are forced to, not because they want to.

I have always assumed that their shareholders were offered so much of a premium on their shares that they chose to sell them rather than hold onto them. Clearly based on their fiscal 2015 results[1] they were a going concern.

[1] https://www.arm.com/company/news/2016/02/arm-holdings-plc-re...

xigency 1324 days ago [-]
I don't have any specific knowledge of the ARM sale but I've seen other mergers.

Clearly, if a buyer is willing to pay a premium for shares, it's because they believe the company is worth that premium. If the shareholders were optimistic, they might consider that as a signal of the company's underlying value and choose not to sell.

Sometimes activist shareholders will pressure a company to sell, like in the case of the Whole Foods sale to Amazon. Jana Partners owned ~8% and they motivated Whole Foods to look for buyers. They dumped their shares after the sale was announced and before it was executed. Whether that was the best for the other 92% of shareholders, the company's employees, and its customers is another question entirely.

My question is really outside of that. There will always be banks, investors, and companies that are motivated to consolidate and integrate companies to be increasingly competitive to the point of being anti-competitive. What is the counter force that helps maintain moderately sized companies that are profitable on their own?

Koshkin 1326 days ago [-]
The next Apple machine I am going to buy will be using RISC-V cores.
saagarjha 1326 days ago [-]
I suspect you'll be waiting for quite a long time, if not forever.
dmix 1326 days ago [-]
FWIW Apple isn't even listed on RISC-V's membership page:

https://riscv.org/membership/members/

While some like Google and Alibaba are listed as platinum founding members.

nickik 1326 days ago [-]
I agree that Apple won't do RISC-V, but you don't need to be a member to use it.
gumby 1326 days ago [-]
Because of this transaction? I’m sure this deal will have absolutely no impact on Apple’s deal with ARM.

Apple could of course afford to invest in RISC-V (and surely has played with it internally) but they have enough control of their future under the current arrangement that it will be a long, long time before they feel any need to switch — 15 years at least.

acomjean 1326 days ago [-]
Apple and Nvidia don't seem to see eye to eye. Nvidia doesn't support Macs (CUDA support was pulled a year or two ago) and Apple doesn't include Nvidia cards.

This could change.

gumby 1326 days ago [-]
This is because Nvidia screwed Apple (from Apple’s POV) years ago with some bad GPUs, to the point where Apple flat out refuses to source Nvidia parts. I don’t know the internal details, of course, just the public ones, so I can’t say if Apple is being petty or if Nvidia burned the bridge while the problem was unfolding.

Given that the CEO was the supply chain guy at the time I suspect the latter, as I’d imagine he’d be more dispassionate than Jobs.

In any case I seriously doubt Nvidia could cancel Apple’s agreement, much less benefit from doing so.

TomVDB 1326 days ago [-]
> ... to the point where Apple flat out refuses to source Nvidia parts.

I've seen this argument made before.

It would be a valid point if Apple stopped using Nvidia GPUs in 2008 (they did), and then never used them again. And yet, 4 years later, they used Nvidia GPUs on the 2012 MacBook Retina 15" on which I'm typing this.

ksec 1325 days ago [-]
And the 2012 GeForce GPU had GPU panic issues and, let's say, higher chances of GPU failure.

And then that was that. We haven't seen an Nvidia GPU since.

TomVDB 1325 days ago [-]
I haven’t seen any of those in 8 years, but I’ll take you at your word...

That said: AMD GPUs have also had their share of issues on MacBooks.

delfinom 1324 days ago [-]
AMD doesn't grossly lie about their thermal specifications like Nvidia consistently does. To the point engineers can't design shit properly. It's one thing to make mistakes in engineering, that can be smoothed out with cash. It's another to outright lie and cover up.
gumby 1326 days ago [-]
Thanks for this correction!
CamperBob2 1326 days ago [-]
ROFL. Why not wait for the upcoming Josephson junction memristors, while you're at it?
ibains 1326 days ago [-]
I love this, I was amongst early engineers on CUDA (compilers).

NVIDIA was so well run, but boxed into a smaller graphics card market - ATI and it were forced into low margins since they were made replaceable by the OpenGL and DirectX standards. For the standards fans - those standards resulted in a wealth transfer from NVIDIA to Apple etc. and reduced the capital available for R&D.

NVIDIA was constantly attacked by a much bigger Intel (which changed interfaces to kill products and was made to pay by a court)

Through innovation, developing new technologies (CUDA) they increased market cap, and have used that to buy Arm/Mellanox.

I love the story of the underdog run by a founder, innovating its way into new markets against harsh competition. Win for capitalism!

enragedcacti 1326 days ago [-]
Nvidia might have been an underdog once, but they are now the world's largest chipmaker, even surpassing Intel.

https://www.extremetech.com/computing/312528-nvidia-overtake...

mhh__ 1326 days ago [-]
And Intel's revenue remains 700% larger than Nvidia's
verall 1325 days ago [-]
Samsung and TSMC are both worth more. America's largest chipmaker.
justicezyx 1326 days ago [-]
The comment identified the positive side of the Nvidia story. Note that Nvidia had not had a large acquisition for many years.

This acquisition can be seen as a beacon of Nvidia's past struggle against the market and its competitors.

Whatever happened, Nvidia innovated its way to success, and has enabled possibly the biggest tech boom so far through deep learning. Maybe one day everyone will call Nvidia the "most important company" on earth.

llukas 1326 days ago [-]
> The comment identified the positive side of the Nvidia story. Note that Nvidia had not had a large acquisition for many years.

Not correct: Mellanox was bought for $7B.

justicezyx 1326 days ago [-]
Bad me... Poor memory!
luxurycommunism 1325 days ago [-]
I'm excited to see what technology is being brought to the table. I don't think Nvidia will settle for being second only to Apple's CPUs in performance.
walterbell 1326 days ago [-]
Talking points from the founders of Arm & Nvidia: https://www.forbes.com/sites/patrickmoorhead/2020/09/13/its-...

> Huang told me that first thing that the combined company will do is to, “bring NVIDIA technology through Arm’s vast network.” So I’d expect NVIDIA GPU and NPU IP to become available quickly to smartphone, tablet, TV and automobile SoC providers as quickly as possible.

> Arm CEO Simon Segars framed it well when he told me, “We're moving into a world where software doesn't just run in one place. Your application today might run in the cloud, it might run on your phone, and there might be some embedded application running on a device, but I think increasingly and with the rollout of 5g and with some of the technologies that Jensen was just talking about this kind of application will become spread across all of those places. Delivering that and managing that there's a huge task to do."

> Huang ... “We're about to enter a phase, where we're going to create an internet that is thousands of times bigger than the internet that we enjoy today. A lot of people don't realize this. And so, so we would like to create a computing company for this age of AI.”

topspin 1326 days ago [-]
My instincts are telling me this is smoke and mirrors to rationalize a $40E9 deal. The only part of that that computes at all is the GPU integration, and that only works if NVIDIA doesn't terrorize Arm licensees. The rest is buzzwords.
Ecco 1326 days ago [-]
Want to be nerdy and use powers-of-ten? Fine by me! But then please go all the way: $4E10!!!
jabl 1326 days ago [-]
Perhaps the parent prefers engineering notation, which uses exponents that are a multiple of three?
topspin 1326 days ago [-]
Got me.
mikkelam 1326 days ago [-]
That's still valid scientific (E) notation, nothing wrong with 10E9. This is also allowed in several programming languages such as Python.
ethbr0 1326 days ago [-]
Or Jensen believes that smart, convergent IoT is at a tipping point, and this is a bet on that.

Not all fashionable words are devoid of meaning.

systemvoltage 1326 days ago [-]
Makes sense. IoT was a buzzword in 2013. It is now a mature ecosystem and we've gotten a good taste and smell for it. It's on its way towards the plateau of productivity on the Gartner curve, if I were to guess.
ncmncm 1326 days ago [-]
The sole meaning of the "Gartner Curve" is the amount of money available to be spent on hype that Gartner can hope to get. It has the most tenuous imaginable relationship with the market for actual, you know, products.
manigandham 1326 days ago [-]
The Gartner Hype Cycle might be branded but the basic delta between overhyped technology vs actual production processes has been observed for a long time.
cnst 1326 days ago [-]
There's a pretty big assumption that the deal even gets approved.

Even if it does get approved, and even if NVIDIA decides not to screw over any of the licensees, the whole notion of NVIDIA being capable of doing so to any of its (NVIDIA's) competitors will surely mean extra selling points for all of ARM's competitors like MIPS, RISC-V, etc.

sharken 1326 days ago [-]
Hopefully this deal will be stopped in its tracks by regulators.

If not, then it's very likely that NVIDIA will do everything in its power to increase prices for ARM designs.

baybal2 1326 days ago [-]
Bye bye ARM Mali =(
paulmd 1326 days ago [-]
This is NVIDIA's "Xbox One/PS4" moment. AMD has deals with console manufacturers; their clients pay for a huge amount of R&D that gets ported back into AMD's desktop graphics architecture. Even if AMD doesn't make basically anything on the consoles themselves, it's a huge win for their R&D.

Now, every reference-implementation ARM processor manufactured will fund GeForce desktop products, datacenter/enterprise, etc as well.

NVIDIA definitely needs something like this in the face of the new Samsung deal, as well as AMD's pre-existing console deals.

fluffything 1326 days ago [-]
> Now, every reference-implementation ARM processor manufactured will fund GeForce desktop products, datacenter/enterprise, etc as well.

That's like throwing pennies onto a pile of gold. NVIDIA makes billions in yearly revenue. ARM makes ~300 million. NVIDIA's revenue is 60% of a GPU's price. ARM's margins in IoT/embedded/phone chips are thin to non-existent. If anything, NVIDIA will need to cut GPU spending to push ARM to the moon. And the announcement already suggests that this will happen.

janoc 1325 days ago [-]
ARM doesn't make any chips. They license IP (CPU cores and the MALI GPU) needed to build those chips to companies like Apple, Samsung, TI, ST Micro, Microchip, Qualcomm ...

That margins on a $1 microcontroller are "thin to non-existent" is thus completely irrelevant - those are the margins of the silicon manufacturer, not ARM's.

fluffything 1325 days ago [-]
So what's ARM's margin per chip sold? ARM's IP isn't free. It is created by engineers who cost money. Those chips sell for $0.10, so ARM's margin can't be more than $0.10 per chip, otherwise the seller would be operating at a loss.
mr_toad 1326 days ago [-]
If I was an nVidia shareholder I wouldn’t want ARM profits to subsidise GPU development.

GPU profits should be able to cover their own R&D, especially given the obscene prices on the high end cards.

dannyw 1326 days ago [-]
Why not? The winning strategy since the 2000s has been to reinvest all profits into growing the business, not to chase juicy margins.
hajile 1325 days ago [-]
It's also constraining.

Console makers dictated that RDNA must use a forward-compatible version of the GCN ISA. While AMD might have wanted to change some decisions that turned out to be less than optimal, they cannot, because they are stopped by the console makers paying the bills.

joshvm 1326 days ago [-]
Is the situation better with Mali? This seems like something that might actually improve with Nvidia. I was under the impression that ARM GPUs are currently heavily locked down anyway (in terms of open source drivers). Nvidia would presumably still be locked down, but maybe we'd have more uniform cross-platform interfaces like CUDA.
CameronNemo 1326 days ago [-]
Tegra open source support is great and actually supported by NVIDIA. Probably the best ARM GPU option for open source drivers.

Mali support is done by outside groups (not ARM). Midgard and Bifrost models are well supported (anything that starts with a T or G, respectively). Support for older models is a little worse, but better than some other GPUs.

Adreno support is done by outside groups (not Qualcomm) and is lagging behind new GPUs that come out considerably.

PowerVR GPUs (Imagination) have terrible open source support.

rhn_mk1 1326 days ago [-]
Tegra the chipset is supported by Nvidia, but even then they are not a paragon of cooperation with the Linux community. They have their own non-mainlined driver, while the mainlined one (tegra) is done by someone else.

They did contribute somewhat to the "tegra" driver at least.

janoc 1326 days ago [-]
Frankly, good riddance.
mlindner 1325 days ago [-]
The Arm CEO really doesn't understand what's going on. There is no future where everything runs in the cloud. That simply cannot happen for legal reasons. Additionally, the internet is getting more balkanized, and that further works against the idea of the cloud. AI will not be running in the cloud; it will be running locally. Apple sees this but many others don't yet. You only run AI in the cloud if you want to monetize it with advertising.
scalablenotions 1325 days ago [-]
So all these AI SAAS companies are fake?
paulpan 1325 days ago [-]
My initial reaction is that this is reminiscent of the AMD-ATI deal back in 2006. It almost killed both companies, and comparatively, this deal is much bigger ($40B vs. $6B), in a more mature industry and between more mature companies.

$40B is an obscene amount of money objectively, and what's the endgame for Nvidia? If it's to "fuse" ARM's top CPU designs with their GPU prowess, then couldn't they invest the money to restart their own CPU designs (e.g. Carmel)? My inner pessimist, like others here, suspects that Nvidia will somehow cripple the ARM ecosystem or prioritize its own needs over those of other customers. Perhaps an appropriate analogy is Qualcomm's IP licensing shenanigans and how they've crippled the non-iOS smartphone industry.

That said, there are also examples of companies making these purchases with minimal insidious behavior and co-existing with their would-be competitors: Microsoft's acquisition of GitHub, Google's Pixel smartphones, Sony's camera lens business, and even Samsung, which supposedly firewalls its components teams so the best tech is available to whoever wants it (and is willing to pay for it).

I suppose if this acquisition ends up going through (big if), then we'll see Nvidia's true intent in 3-5 years.

dijit 1324 days ago [-]
I see two ways of it playing out for Nvidia:

1) Adoption of ARM CPUs (AWS Graviton, rPi, etc.) will cause software to be adapted to ARM anyway, meaning Nvidia could come out with a fully vertically integrated cloud.

or

2) Leveraging full vertical integration for ML-based supercomputers.

acuozzo 1325 days ago [-]
> what's the endgame for Nvidia?

Announced at ARM TechCon last year: https://www.arm.com/products/silicon-ip-cpu/ai-platform

PragmaticPulp 1326 days ago [-]
I’m not convinced this is a death sentence for ARM. I doubt nVidia spent $40b on a company with the intention of killing its golden-goose business model. The contractual agreements might change, but ARM wasn’t exactly giving their IP away for free before this move.
MattGaiser 1326 days ago [-]
It is less about them intentionally killing it and more about their culture and attitude killing it.
asdfasgasdgasdg 1326 days ago [-]
Does Nvidia have a habit of killing acquisitions? I'm only familiar with their graphics business, but as far as I can see the only culture going on there is excellence.
ATsch 1326 days ago [-]
The concern is more that Nvidia's culture has historically been overall hostile to partnerships. Which works great for what Nvidia is doing right now, but is probably bad for a company that depends heavily on partnerships.
p1necone 1326 days ago [-]
NVIDIA's whole schtick is making a bunch of interesting software and arbitrarily locking it to their own hardware. That doesn't seem compatible with being the steward for what has, up until now, been a relatively open CPU architecture.
ImprobableTruth 1326 days ago [-]
Arbitrarily? Nvidia invests a lot in software R&D; why should they just give it away to their competitor AMD, who invest basically nothing in comparison?
rocqua 1326 days ago [-]
Arbitrary as in, without technical reasons.

An open architecture and a business model based on partnership don't really square with vendor-locking your products for increased profits.

ip26 1326 days ago [-]
At issue is the conflict between ARM's business model, which revolves around licensing designs to other companies, and Nvidia's reputation for not playing nicely with other companies.
Hypx 1326 days ago [-]
You know how the tobacco companies work?

From a purely capitalistic standpoint, it's fine to kill off some of your customer base if you make more money from the remainder. If it can work for tobacco, you can believe that Nvidia is willing to kill off some of its customers if it can get the remainder to pay more.

scruffyherder 1326 days ago [-]
Unless they are putting explosives in the chips, the customers will be free to go elsewhere.
Hypx 1326 days ago [-]
If their software is dependent on the ARM ISA then they can't.
qubex 1325 days ago [-]
mumble mumble Turing complete mumble mumble
ckastner 1326 days ago [-]
Softbank paid $32B for ARM in 2016.

A 25% gain over a horizon of four years is not bad for your average investment -- but this isn't an average investment.

First, compared to the SP500, this underperforms over the same horizon (even compared to end of 2019 rather than the inflated prices right now).

Second, ARM's sector (semiconductors) has performed far, far better in that time. The SOX (PHLX Semiconductor Index) doubled in the same time period.

And looking at AMD and NVIDIA, it feels as if ARM would have been in a position to benefit from the surrounding euphoria.

On the other hand, unless I'm misremembering, ARM back then was already considered massively overvalued precisely because it was such a prime takeover target, so perhaps it's the $32B that is throwing me off here.

yreg 1326 days ago [-]
It vastly outperforms the Softbank ventures we usually hear about (excluding BABA).
smcl 1326 days ago [-]
To be honest, cash deposited in a boring current account outperforms Softbank's Vision Fund
lrem 1326 days ago [-]
There's also a fundamental threat to ARM in the rise of RISC-V.
janoc 1326 days ago [-]
RISC-V is only a tiny player in the low-end embedded space (there are only a few parts actually available with that instruction set) and no competition at all at the high end.

Maybe in a decade or so it may be more relevant but not today. Calling this a "fundamental threat" is a tad exaggerated.

kabacha 1326 days ago [-]
Where are people getting the "decade" number from? It looks like demand for RISC-V just went up significantly, and it's not like it's broken or non-existent; RISC-V "just worked" a year ago already: https://news.ycombinator.com/item?id=19118642
janoc 1325 days ago [-]
The "decade" comes from the fact that there is currently no RISC-V silicon with performance anywhere near the current ARM Cortex A series.

And a realistic estimate of how long it will take to develop something on par with today's Snapdragon, Exynos or Apple chips is at least those 10 years. You need quite a bit more than just the instruction set to have a high-performance processor.

The "just worked" chips are microcontrollers, something you may want to put in your toaster or fridge but not a SoC at the level of e.g. Snapdragon 835 (which is an old design, at that).

Also the Sipeed chips are mostly just unoptimized reference designs, they have a fairly poor performance.

Most people who talk and hype RISC-V don't realize this.

hajile 1325 days ago [-]
A ground-up new architecture takes 4-5 years.

Alibaba recently said their XT910 was slightly faster than the A73. Since the first actual A73 launched in Q4 2016, that would imply they are at most 4 years behind.

SiFive's U8 design from last year claimed to have the same performance as the A72 with 50% greater performance per watt, using half the die area. Consider how popular the Raspberry Pi is with its A72 cores. With those RISC-V cores, they could drastically increase the cache size and even potentially add more PCIe lanes within the same die size and power limits.

Finding out new things takes much more time than re-implementing what is known to already work. As with other things, the 80/20 rule applies. ARM has caught up a few orders of magnitude in the past decade. RISC-V can easily do the same, given the lack of royalties. Meanwhile, the collaboration helps to share costs and discoveries, which might mean progress will be even faster.

andoriyu 1325 days ago [-]
Hold on, the reason RISC-V cores are slower is that the companies making them don't have an existing back-end or have only just gotten into the CPU game.

I'm not saying Apple can drop a RISC-V front-end into their silicon and call it a day, but you get the idea.

SiFive has a pretty decent chance of making performant chips within the next few years.

lrem 1325 days ago [-]
But you don't buy a company for tens of billions ignoring anything that's not relevant today. You pay based on what you predict the company will earn between today and liquidation. When Softbank originally bought ARM, no competition was on the radar. Now there is some. Hence, the valuation drops.
janoc 1325 days ago [-]
Softbank bought ARM for $32 billion, sold for $40 billion, plus they were collecting profits during all that time. Not quite sure how that meshes with "valuation drops" ...

RISC-V in high performance computing is years out even if big players like Samsung or Qualcomm decided to switch today. New silicon takes time to develop.

And Nvidia really couldn't care less about the $1-$10 RISC-V microcontrollers that companies like SiPeed or Gigadevice are churning out today (and even there ARM micros are outselling these by several orders of magnitude).

lrem 1325 days ago [-]
Let me just quote what I originally responded to:

> A 25% gain over a horizon of four years is not bad for your average investment -- but this isn't an average investment.

> First, compared to the SP500, this underperforms over the same horizon (even compared to end of 2019 rather than the inflated prices right now).

> Second, ARM's sector (semiconductors) has performed far, far better in that time. The SOX (PHLX Semiconductor Index) doubled in the same time period.

> And looking at AMD and NVIDIA, it feels as if ARM would have been in a position to benefit from the surrounding euphoria.

That's the "valuation drops" - relative to the market, ARM has significantly underperformed, despite the business actually being healthy and on the rise.

redwood 1326 days ago [-]
The British should never have allowed foreign ownership of their core tech
bencollier49 1326 days ago [-]
I'm so exercised about this that I'm setting up a think tank to actively discuss UK control of critical tech (and "golden geese" as per others in this thread). If you're in tech and have a problem with this, please drop me a line, I'm @bencollier on Twitter.
ranbumo 1326 days ago [-]
Yes. It'd have been reasonable to block sales to non-EU parties for national security reasons.

Now Arm is yet another US company.

scarface74 1326 days ago [-]
Isn’t the UK leaving the EU?
mkl 1326 days ago [-]
Yes, but the Brexit referendum was only 1 month before SoftBank acquired Arm Holdings. The deal was probably already in progress, and finalised before the UK had any real policies about Brexit, so EU requirements would have been reasonable if decided ahead of time. But the timing may also explain the lack of any national-security-focused requirement (general confusion).
kzrdude 1326 days ago [-]
That just means that their home/core market got smaller, so they should have protected ARM to stay inside E̶U̶ the UK, then.
hajile 1325 days ago [-]
Britain and the US are politically and economically entwined with each other. As a country, keeping core technology at home is tied to defense (don't buy your guns from your competitor). If they don't intend to go to war with the US, then there isn't any real defense loss (I'd also point out that making IP "blueprints" for a core is different from manufacturing the core itself). If Britain were to have a real issue, it would be the US locking down the F-35 jets it sells to its allies.
jasoneckert 1326 days ago [-]
Most tech acquisitions are fairly bland - they often maintain their separate ways for several years with a bit of integration. Others satisfy a political purpose or serve to stifle competition.

However, given the momentum of Nvidia these past several years alongside the massive adoption and evolution of ARM, this is probably going to be the most interesting acquisition to watch over the next few years.

zmmmmm 1326 days ago [-]
There are many possible reasons for this, but one perspective I am curious about is how much this is actually a defensive move against Intel. nVidia knows Intel is busy developing dedicated graphics via Xe, and if nVidia just allows that to continue, they are going to find themselves simultaneously competing with and dependent on a vendor that owns the whole stack their platform depends on. It is not a place I would want to be, even accounting for how incompetent Intel seems to have been for the last 10 years.

Edit: yes I meant nVidia not AMD!

jml7c5 1326 days ago [-]
How does AMD enter into this? Did you mean Nvidia?
zmmmmm 1326 days ago [-]
ouch I wrote a whole comment and systematically replaced Nvidia with AMD ... kind of impressive.

Thanks!

Tehdasi 1326 days ago [-]
Intel has been promising high-end graphics for decades, and delivering low-end integrated graphics as a feature for their CPUs. Which makes sense: the market for CPUs is worth more than the market for game-oriented GPUs. The rise of GPUs used in AI might change this calculation, but I doubt it. I suspect that nVidia just would like to move into the CPU market.
fluffything 1326 days ago [-]
Nvidia could have bought a world-class CPU architecture team and built their own ARM or RISC-V chips (NVIDIA already has an ARM architecture license).
UncleOxidant 1326 days ago [-]
On the bright side, this could end up being a big boost for RISC-V.
nickt 1326 days ago [-]
Probably worth a second look at this RISC-V desktop thread:

https://news.ycombinator.com/item?id=19118642

kristianpaul 1326 days ago [-]
Indeed, looking right now at https://rioslab.org/.
miguelmota 1326 days ago [-]
Would love to see RISC-V catch up and be more widely adopted.
m00dy 1326 days ago [-]
and this would be killer for Intel
hajile 1325 days ago [-]
Even if/when RISC-V takes over, Intel and AMD will be in a unique position to offer "combination" chips with both x86 and RISC-V cores, which could milk the richest enterprise and government markets for decades to come.
UncleOxidant 1326 days ago [-]
Maybe? But somehow I don't think they'll be able to capitalize on it because Intel.
ykl 1326 days ago [-]
I wonder what this means for NVIDIA's recent RISC-V efforts [1]. Apparently they've been aiming to ship (or already have been shipping?) RISC-V microcontrollers on their GPUs for some time.

[1] https://riscv.org/wp-content/uploads/2017/05/Tue1345pm-NVIDI...

alexhektor 1325 days ago [-]
None of the top comments discuss the possibility of the deal not going through due to antitrust or other concerns by regulators. While ARM is owned by a Japanese company and being sold to an American one, China most likely doesn't approve, and it could become a diplomatic issue due to security and intelligence concerns.

Not sure how realistic that scenario is, although I personally can very much see this being used as a negotiation vehicle, depending on the actual security concern (I'm obviously not an expert there).

[1] https://www.globaltimes.cn/content/1200871.shtml

throwaway4good 1326 days ago [-]
Qualcomm and Apple are going to be fine even with NVIDIA owning ARM. They are American companies under the protection of US legislation and representation.

However the situation for Chinese companies is even clearer now. Huawei, Hikvision etc. need to move away from ARM. Probably on to their own thing as RISC-V is dominated by US companies.

vaxman 1325 days ago [-]
Qualcomm, Apple and NVidia will lose favor with the US government unless they bring (at least a full copy of) their fab partners and the rest of their supply chains home to America (the Southwest, including the USMCA zone). We love Southeast Asia, but the pandemic highlighted our vulnerability and, as a country, we're not going to keep sourcing our critical infrastructure in China or its backyard. If those American CEOs keep balking at the huge investment required, you will see the US government write massive checks to Intel (which has numerous, albeit obsolete, domestic fabs), DELL and upstarts (like System76 in Colorado) to pick winners, while the elevator to hell gains a new stop in Silicon Valley and San Diego (nationalizing patents, etc) during a sort of "war effort" like we had in the early 1940s.
nl 1326 days ago [-]
Just noting that Apple doesn't have a perpetual license; they have an architecture license[1], including for 64-bit parts[2].

This allows them to design their own cores using the Arm instruction set[3] and presumably includes perpetual IP licenses for Arm IP used while the license is in effect. New Arm IP doesn't seem to be included, since existing 32-bit Arm licensees had to upgrade to a 64-bit license[2].

[1] https://www.anandtech.com/show/7112/the-arm-diaries-part-1-h...

[2] https://www.electronicsweekly.com/news/business/finance/arm-...

[3] https://en.wikipedia.org/wiki/ARM_architecture#Architectural...

throw_m239339 1326 days ago [-]
Tangential, but when I hear about all these insane "start-up du jour" valuations, does anyone else feel like $40B isn't a lot for a hardware company such as ARM?
beervirus 1326 days ago [-]
$40 billion is a real valuation though, as opposed to WeWork’s.
incognition 1326 days ago [-]
Maybe referencing Nikola?
broknbottle 1326 days ago [-]
are you suggesting a bunch of office space leases that are fully stocked with beer is not worth $40 billion?
krick 1325 days ago [-]
We've been repeating the word "bad" for the last couple of weeks here, but I don't really remember any insights on what can happen long term (and I'm asking, because I have absolutely no idea). I mean, let's suppose relationships with Qualcomm don't work out (which we all kind of suspect already). What's the alternative? Is it actually possible to create another competitive architecture at this point? Does it take 5, 10 years? Is there even a choice for some other (really big) company, that doesn't want to depend on NVIDIA?
axaxs 1326 days ago [-]
This seems fair, as long as it stays true (regarding open licensing and neutrality). I've mentioned before, I think this will ultimately be a good thing. NVidia has the GPU chops to really amp up the reference implementation, which is a good thing for competition in the mobile, set-top, and perhaps even desktop space.
andy_ppp 1326 days ago [-]
No, what everyone thinks will happen is a pretend open ARM architecture and Nvidia CPUs dominating. Nvidia isn’t going to license the best GPU features they start adding.

It’s an excellent deal for NVIDIA of course. I’m certain they intend to make the chips they produce much faster than the ones they license (if they even ever release another open design), to the point where buying CPUs from Nvidia might be the only game in town. We’ll have to see but this is what I expect to happen.

fluffything 1325 days ago [-]
That's not what everyone thinks.

NVIDIA already has a CPU architecture team building their own ARM CPUs with an unlimited ARM license.

ARM doesn't give NVIDIA a world-class CPU team like Apple's, Amazon's or Fujitsu's. ARM's own cores are "meh" at best. Buying such a team would also have been much cheaper than $40B.

Mobile ARM chips are meh, but Nvidia doesn't have GPUs for that segment, and their current architectures probably don't work well there. The only ARM chips that are ok-ish are embedded/IoT at a < 1W power envelope. It would probably take Nvidia 10 years to develop GPUs for that segment; the margins there are razor thin ($0.10 is the cost of a full SoC in that segment), and it is unclear whether applications in that segment need GPUs (your toaster certainly does not).

The UK appears to require huge R&D investments in ARM to allow the sale. And ARM's bottom line is around $300 million a year, which is peanuts for Nvidia.

So if anything, ARM has a lot to win here, with Nvidia pumping in money like crazy to try to improve ARM's CPU offering. Yet this all seems super-risky, because in the segments ARM competes in, RISC-V competes as well, and without royalties. It is hard to compete against something that's free, even if it is slightly less good. And chances are that over the next 10 years RISC-V will have much better cores (NVIDIA themselves started replacing ARM cores with RISC-V cores in their GPUs years ago already...).

Either way, the claim that it is obvious to everybody what 3D chess is being played here is false. To me this looks like a bad buy for Nvidia. They could have paid $1 billion for a world-class CPU team and just continued to license ARM and/or switched to RISC-V chips. Instead they are spending $40 billion on a company that makes $300 million a year, makes meh CPUs, is heavily regulated in the UK and around the world, has problems with China due to being in the West, and has to invest in the UK, which is leaving the EU in a couple of weeks, etc.

axaxs 1325 days ago [-]
If you only look at today's numbers, it doesn't make a ton of sense. But looking at the market, the world is moving -more- towards ARM, not away. The last couple years have given us ARM in a gaming console, ARM in mainstream laptops, ARM in the datacenter. Especially as the world strives to go 'carbon neutral', ARM kills everything from Intel/AMD. So with that in mind, I don't think it's a bad buy, but time will tell.

RISC-V and ARM can coexist, but RISC-V in the mainstream is far away due to nothing more than momentum. People won't even touch Intel in a mobile device anymore, not just because of power usage, but also because of software compatibility.

rrss 1325 days ago [-]
> ARM doesn't give NVIDIA a world-class CPU team like apple's, amazon's or fujitsu.

Are you referring to the Graviton2 for Amazon? If so, you might be interested to learn that ARM designed the cores in that chip.

> (NVIDIA themselves started replacing ARM cores with RISC-V cores in their GPUs years ago already...).

The only info on this I'm aware of is https://riscv.org/wp-content/uploads/2017/05/Tue1345pm-NVIDI..., which says nvidia is replacing some internal proprietary RISC ISA with RISC-V, not replacing ARM with RISC-V.

yvdriess 1325 days ago [-]
> NVIDIA already has an CPU architect team building their own ARM CPUs with an unlimited ARM license.

Famously, the Tegra SoCs, as used in the Nintendo Switch.

Followerer 1325 days ago [-]
No. The Tegra SoC within the Nintendo Switch (the X1) uses ARM cores, specifically the A57 and A53. NVIDIA's project to develop their own v8.2 ARM-based chip is called Denver.
yvdriess 1324 days ago [-]
How do you define the difference between an in-house-developed SoC with ARM IP cores and Denver's "ARM-based chip"? Is it going to be a new architecture, but using a mix of ARM IP blocks and in-house IP following the ARM ISA?
andy_ppp 1325 days ago [-]
It’s a bad buy unless they plan to use this to leverage a better position. Your argument is that ARM is essentially worthless to NVIDIA, or at least an extremely expensive $40bn bet. I guess only time will tell, but I think NVIDIA intend to make their money back on this purchase, and that won’t be through the current licensing model (as your own figures show).
axaxs 1326 days ago [-]
Perhaps not. I mean AMD and soon Intel basically compete with themselves by pushing advances in both discrete GPU and APU at the same time, one negating the need for the other.

I'm not claiming I'm right and you're wrong, of course. I just think it's unfair to make negative assumptions at this point, so wanted to paint a possible good thing.

andy_ppp 1326 days ago [-]
Agree, Nvidia’s track record on opening things up is pretty bad. They are very good at business though!
WhyNotHugo 1326 days ago [-]
This is terrible news for the FLOSS community.

Nvidia has consistently for many years refused to properly support Linux and other open source OSs.

Heck, Wayland compositors just say "if you're using Nvidia then don't even try to use our software" since they're fed up with Nvidia's lack of collaboration.

I really hope ARM doesn't go the same way. :(

janoc 1326 days ago [-]
ARM itself has little to no impact on the open source community. They only license chip IP; they don't make any chips themselves. And most of the ARM architecture is documented and open, with the exception of things like the Mali GPU.

Whether or not some SoC (e.g. in a phone) is going to be supported by Linux doesn't depend on ARM but on the manufacturer of the given chip. That won't change in any way.

ARM maintains the GCC toolchain for the ARM architecture but that is unlikely to go anywhere (and even if it did, it is open source and anyone else can take it over).

The much bigger problem is that Nvidia could now start putting the squeeze on chip makers who license the ARM IP, for its own business reasons - Nvidia makes its own ARM-based ICs (e.g. the Jetson, Tegra) and it is hard to imagine that they will not try to use their position to stifle the competition (e.g. from Qualcomm or Samsung).

hajile 1325 days ago [-]
https://developer.arm.com/tools-and-software/open-source-sof...

ARM directly maintains the main ARM parts of the Linux kernel among other things.

WhyNotHugo 1323 days ago [-]
> And most of the ARM architecture is documented and open

That's exactly the thing -- Nvidia refuses to document their products, so third parties can't support them.

Should they extend their practices to ARM, then ARM support will quickly wither.

1326 days ago [-]
jpswade 1326 days ago [-]
I feel like this is yet more terrible news for the UK.
m0zg 1326 days ago [-]
On the one hand, this is bad news - I would prefer ARM to remain independent. But on the other, from a purely selfish standpoint, NVIDIA will likely lean on Apple pretty hard to get its GPUs into Apple devices again, which bodes well for GPGPU and deep learning applications.

Apple is probably putting together a RISC-V hardware group as we speak. The Jobs ethos will not allow them to depend this heavily on somebody else for such a critical technology.

buzzerbetrayed 1325 days ago [-]
A few weeks ago there were rumors that ARM was looking to be sold to Apple and Apple turned them down. If an NVIDIA acquisition is such a deal breaker for Apple, why wouldn't they have just acquired ARM to begin with?
totorovirus 1326 days ago [-]
Nvidia is notorious for not being nice to OSS developers, as Linus Torvalds claims: https://www.youtube.com/watch?v=iYWzMvlj2RQ&ab_channel=Silic...

I wonder how Linux would react to this news.

tontonius 1326 days ago [-]
You mean how GNU/Linux would react to it?
RealStickman_ 1326 days ago [-]
Don't exclude the alpine folks please
maxioatic 1326 days ago [-]
> Immediately accretive to NVIDIA’s non-GAAP gross margin and EPS

Can someone explain this? (From the bullet points of the article)

I looked up the definition of accretive: "characterized by gradual growth or increase."

So it seems like they expect this to increase their margins. Does that mean ARM had better margins than NVIDIA?

Edit: I don't know what non-GAAP and EPS stand for

bluejay2 1326 days ago [-]
You are correct that it means they expect margins to increase. One possibility is that ARM has higher margins as you mentioned. Another is that they are making some assumptions about how much they can reduce certain expenses by, and once you factor in those savings, margins go up.
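
To make "accretive to gross margin" concrete, here's a minimal sketch of the blended-margin arithmetic in Python, with made-up revenue and margin figures (these are hypothetical numbers, not NVIDIA's or ARM's actuals): folding a higher-margin business into a lower-margin one lifts the combined margin.

    # Made-up figures purely for illustration
    acquirer_revenue, acquirer_margin = 11.0e9, 0.62   # hypothetical acquirer
    target_revenue, target_margin = 1.8e9, 0.90        # hypothetical higher-margin target

    combined_revenue = acquirer_revenue + target_revenue
    combined_gross_profit = (acquirer_revenue * acquirer_margin
                             + target_revenue * target_margin)
    blended_margin = combined_gross_profit / combined_revenue

    # Blended margin lands above the acquirer's 62%, i.e. the deal is "accretive"
    print(f"blended gross margin: {blended_margin:.1%}")
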
tyingq 1326 days ago [-]
I would guess they do have higher margins since they are mostly selling licenses and not actual hardware.

This article is old, but suggests a 48% operating margin: https://asia.nikkei.com/NAR/Articles/ARM-posts-strong-profit...

ericmay 1326 days ago [-]
EPS -> earnings per share

Non-GAAP -> doesn't follow generally accepted accounting principles. There are alternative accounting methods. GAAP is very US-centric (not good or bad, just stating a fact).

salawat 1326 days ago [-]
Though note the intent of GAAP is to cut down on "creative accounting" which can tend to mislead.
pokot0 1326 days ago [-]
GAAP is a set of accounting rules; EPS is earnings per share. Basically it means they think the deal will increase their gross margin and EPS, but you can't sue them if it does not.
zeouter 1326 days ago [-]
Eeek. My gut reaction is: could we have less powerful conglomerates, please?
Lind5 1326 days ago [-]
Arm’s primary base is in the IoT and the edge, and it has been very successful there. Its focus on low power allowed it to shut out Intel from the mobile phone market, and from there it has been gaining ground in a slew of vertical markets ranging from medical devices to Apple computers. But as more intelligence is added to the edge, the next big challenge is to radically improve performance and further reduce power, and the only way to make that happen is to more tightly customize the algorithms to the hardware, and vice versa. https://semiengineering.com/nvidia-to-buy-arm-for-40b/
MangoCoffee 1326 days ago [-]
https://semiwiki.com/ip/287846-tears-in-the-rain-arm-and-chi...

Does this mean Nvidia will have to deal with the hot mess at ARM China?

justincormack 1325 days ago [-]
Allegedly it was sorted out before this deal was announced. No idea how.
hastradamus 1325 days ago [-]
Why would Nvidia spend $40B to ruin Arm? I can't see them making a return on this investment. No one wants to work with Nvidia; they are notoriously ruthless. I'm sure everyone is making plans to move to something else ASAP. Maybe RISC-V.
mzs 1315 days ago [-]
Some decision makers there dislike Apple that much.
peterburkimsher 1326 days ago [-]
Sounds like everyone is rallying around RISC-V. What does this mean for MIPS?

"ARM was probably what sank MIPS" - saagarjha

https://news.ycombinator.com/item?id=24402107

Zigurd 1325 days ago [-]
Wave owns MIPS, which I had no idea about, and googling that also turns up that Wave went Chapter 11 this year.
bleepblorp 1326 days ago [-]
This isn't going to do good things for anyone who doesn't own a lot of Nvidia stock.

This is going to do especially bad things for anyone who needs to buy a cell phone or the SoC that powers one. There's no real alternative to ARM-based phone SoCs. Given Nvidia's business practices, any manufacturer who doesn't already have a perpetual ARM license should expect to have to pay a lot more money into Jensen Huang's retirement fund going forward. These costs will be passed on to consumers and will also provide an avenue for perpetual license holders to raise their consumer prices to match.

jacques_chester 1326 days ago [-]
> This isn't going to do good things for anyone who doesn't own a lot of Nvidia stock.

If it makes you feel any better, studies of acquisitions show that most of them are duds and destroy acquirer shareholder value.

imtringued 1326 days ago [-]
Well, the commenter is actively worried about Nvidia destroying shareholder value. If you destroy ARM in the process of acquiring it the combined company will be worth less in the long run. If the acquisition was actually motivated by synergy then Nvidia could have gotten away with a much cheaper license from ARM.
bgorman 1326 days ago [-]
Android worked on x86 and MIPS in the past, it could presumably be ported to work with RISC-V
janoc 1326 days ago [-]
You would first need actual working RISC-V silicon that would be worth porting to. Not the essentially demo chips with poor performance that are around now.

RISC-V is a lot of talk and hype, but actual silicon that you could buy and put into a product is hard to come by. With the exception of a few small microcontroller efforts (GigaDevice, Sipeed).

bleepblorp 1326 days ago [-]
Android still works on x86-64; indeed there are quite a few actively maintained x86-64 Android ports that are used, both on bare metal PCs and virtualized, for various purposes.

The problem is that there are no x86 SoCs that are sufficiently power efficient to be battery-life competitive with ARM SoCs in phones.

seizethegdgap 1326 days ago [-]
Can confirm, just spun up Bliss OS on my Surface Book. Not at all the smoothest experience I've ever had using an Android tablet, but it's nice.
Zigurd 1325 days ago [-]
MIPS support was removed from the Android NDK, though older versions of the NDK still stand a good chance of working. So app developers with components needing NDK support have a bit of work to remain compatible.
saagarjha 1326 days ago [-]
It might pretty much work today, perhaps with a few config file changes.
yangcheng 1326 days ago [-]
I am surprised that no one has mentioned China will very likely block this deal.
mnd999 1326 days ago [-]
I really hope so. The UK government should block it also, but I don't think they will.
zaptrem 1326 days ago [-]
On what grounds/with what authority?
genocidicbunny 1326 days ago [-]
Is it that hard to see that China might view this as an American company further monopolizing silicon tech, potentially cutting China off from Arm designs?

But more to the point, this is also China. If you want to do business in China, you're going to do as they tell you, or you get the stick. And if you don't like it, what are you going to do?

yangcheng 1326 days ago [-]
Is this a question or just sarcasm? The post clearly says: "The proposed transaction is subject to customary closing conditions, including the receipt of regulatory approvals for the U.K., China, the European Union and the United States. Completion of the transaction is expected to take place in approximately 18 months."
incognition 1326 days ago [-]
Are you thinking retribution for Huawei?
ttflee 1326 days ago [-]
China de facto blocked the Qualcomm-NXP merger during a trade talk by not providing a decision before the deal's deadline.
runeks 1325 days ago [-]
The root problem here is the concept of patents -- at least as far as I can see.

If patents did not exist, and nVidia were to close down ARM and tell people "no more ARM GPUs; only nVidia GPUs from now on", then a competitor who offers ARM-compatible ISAs would quickly appear. But in the real world, nVidia just bought the monopoly rights to sue such a competitor out of existence.

It's really no wonder nVidia did this given the profits they can extract from this monopoly (on the ARM ISA).

filereaper 1326 days ago [-]
Excellent, can't wait for Jensen to pull out a Cortex A-78 TI from his oven next year. /s
broknbottle 1326 days ago [-]
hodl out for the Titan A-78 Super if you can, I hear it runs a bit faster
hyperpallium2 1326 days ago [-]
edge-AI!

They're also building an ARM supercomputer at Cambridge, but server-ARM doesn't sound like a focus.

I'm just hoping for some updated mobile Nvidia GPUs... and maybe the rumoured 4K Nintendo Switch.

They say they won't muck it up, and it seems sensible to keep it profitable:

> As part of NVIDIA, Arm will continue to operate its open-licensing model while maintaining the global customer neutrality that has been foundational to its success, with 180 billion chips shipped to-date by its licensees.

fluffything 1325 days ago [-]
> but server-ARM doesn't sound like a focus.

ARM doesn't have good server CPU IP. Graviton, A64FX, etc. belong to other companies.

Followerer 1325 days ago [-]
Graviton is Neoverse, ARM's Neoverse.
ianai 1326 days ago [-]
They sure seem to be marketing this as a logical move for their AI platform.
mikorym 1325 days ago [-]
Wow, didn't someone call this out recently on HN? I mean, someone mentioned that this was going to happen. (Or rather, feared that this was the direction things were going in.)

On a different topic, how would this influence Raspberry Pis going forward?

paxys 1326 days ago [-]
No doubt because Softbank was facing investor pressure to make up for their losses.
1326 days ago [-]
easton 1326 days ago [-]
Could Apple hypothetically use their perpetual license to ARM to license the ISA to other manufacturers if they so desired? (not that they do now, but it could be a saving grace if Nvidia assimilated ARM fully).
Wowfunhappy 1326 days ago [-]
I'm pretty sure they can't, but I also think there's no way in hell they'd do it if they could. It's not in Apple's DNA. Better for them if no one else has access to the instruction set.

I bet they'd make a completely custom ISA if they could. Heck, maybe they plan to some day, and that's why they're calling the new Mac processors "Apple Silicon".

pier25 1326 days ago [-]
> I bet they'd make a completely custom ISA if they could. Heck, maybe they plan to some day, and that's why they're calling the new Mac processors "Apple Silicon".

That was my first thought.

I'd be surprised if Apple wasn't already working on this.

mhh__ 1326 days ago [-]
I could see Apple being the next in a line of basically failed VLIW ISAs - the cost/benefit doesn't really add up to completely redesign, prove, implement, and support a new ISA unless it was worth it technologically.

If they could pull it off I would be very impressed.

exikyut 1326 days ago [-]
Thinking about that though, from a technical perspective keeping the name but changing the function would only produce bad PR.

- "We changed from the 68K to PPC" "Agh!...fine"

- "We changed from PPC to x86" "What, again?"

- "We changed from x86 to Apple Silicon" "...oooo...kay..."

- "We changed from Apple Silicon to, uh, Apple Silicon - like it's still called Apple Silicon, but the architecture is different" "What's an architecture?" "The CPU ISA." "The CPU is a what?"

pokot0 1326 days ago [-]
Why would they ever want to enable their competitors?
newsclues 1326 days ago [-]
To spite nVidia.
hndamien 1326 days ago [-]
If Larry David has taught me anything, it is that spite stores are not good business.
mumblerino 1326 days ago [-]
It depends on who you are. Are you a Larry David or are you a Mila Kunis?
netheril96 1324 days ago [-]
Well, China will surely block this, just like how it blocked the Qualcomm attempt to buy NXP (https://www.reuters.com/article/us-nxp-semicondtrs-m-a-qualc...).

Otherwise ARM would become a US company, and then the US would have another weapon in its arsenal to sanction China's tech companies (e.g. Huawei).

m00dy 1326 days ago [-]
Can anyone point me to who would be competing with Nvidia in the AI market?
joshvm 1326 days ago [-]
There's low-power inference from Intel (Movidius) and Google (Coral Edge TPU). Nvidia doesn't really have anything below the Jetson Nano. I think there is a smattering of other low-power cores out there (also dedicated chips in phones). TPUs are used on the high-performance end, and there are also companies like Graphcore who do insane things in silicon. Also niche HPC products like Intel Knights Landing (Xeon Phi), which is designed for heterogeneous compute.

There isn't a huge amount of competition in the consumer/midrange sector. Nvidia has almost total market domination here. Really we just need a credible cross-platform solution that could open up GPGPU on AMD. I'm surprised Apple isn't pushing this more, as they heavily use ML on-device, and to actually train anything you need Nvidia hardware (e.g. try buying a MacBook for local deep learning training using only Apple-approved bits, it's hard!). Maybe they'll bring out their own training silicon at some point.

Also you need to make a distinction between training and inference hardware. Nvidia absolutely dominates model training, but inference is comparatively simpler to implement and there is more competition there - often you don't even need dedicated hardware beyond a CPU.

option 1326 days ago [-]
Google, Habana (Intel), AMD in a year or two, Amazon in few years
andy_ppp 1325 days ago [-]
Okay, let's look at this from a slightly different POV. We have two different things here: ARM CPU design licenses and ARM ISA licenses. If I were an ARM CPU maker, I think it would be best to set up something to design better shared ARM CPU designs; the problem, of course, is building an amazing team and dealing with patents, which could prove extremely expensive to solve.

Is it possible/legal for Nvidia to charge a billion dollars per CPU for the ISA license, or are these things perpetual?

naruvimama 1325 days ago [-]
(Nvidia + ARM) - Nvidia >> $40B

Only about 50% of Nvidia's business is gaming, and nascent divisions like data centres can get a big boost from the acquisition.

We used to connect Nvidia only with GPUs, and perhaps AI & ML. Now they are going to be a dominant player everywhere: consumer devices, IoT, cloud, HPC & gaming.

And since Nvidia does not fab its own chips like Intel does, this transformation is going to be pretty quick.

If only they went into the public cloud business, we as customers would have another strong vendor to choose from.

gumby 1326 days ago [-]
Amidst all the hand-wringing: RISC-V is at least a decade away (at current pace + ARM’s own example). But what if Google bought AMD and put TPUs on the die?
thrwyoilarticle 1325 days ago [-]
AMD's x86 licence isn't transferable. To acquire them is to destroy their value.
askvictor 1326 days ago [-]
Because so many of Google's acquisitions have ended up doing well...
sib 1326 days ago [-]
Android, YouTube, Google Maps, DoubleClick, Applied Semantics (became AdSense), DeepMind, Urchin, ITA Software, etc.

I think Google has done ok.

dbcooper 1326 days ago [-]
Has it had any successes with hardware acquisitions?
Zigurd 1325 days ago [-]
Google has steadily gained smart home device market share vs a very good competitor and is now the dominant player.
yogrish 1326 days ago [-]
SoftBank is a true banking company... it invested in (bought) ARM and is now selling it for a meagre profit. A company that says it has a 300-year vision.
stunt 1324 days ago [-]
Maybe they were forced to choose behind the scenes. Or they see better opportunities and need to cash out right now.
ComputerGuru 1326 days ago [-]
Doesn’t this need regulatory approval from the USA and Japan? (Not that the USA would look a gift horse in the mouth, of course.)
sgift 1326 days ago [-]
There is a note at the end of the article:

"The proposed transaction is subject to customary closing conditions, including the receipt of regulatory approvals for the U.K., China, the European Union and the United States. Completion of the transaction is expected to take place in approximately 18 months."

Hopefully, the EU does its job, laughs at this, and either tells Nvidia to go home or forces them into FRAND licensing of ARM IP.

kasabali 1326 days ago [-]
> forces them to FRAND licensing of ARM IP

FRAND licensing is worthless, if Qualcomm taught us anything.

1326 days ago [-]
deafcalculus 1326 days ago [-]
nVidia clearly wants to compete with Intel in data centers. But how does buying ARM help with that? They already have an architectural license.

Right now, I can see nVidia replacing Mali smartphone GPUs in low to mid-end Exynos SoCs and the like. But it's not like nVidia to want to be in that low-margin area.

fluffything 1325 days ago [-]
> I can see nVidia replacing Mali smartphone GPUs in low to mid-end Exynos SoCs and the like.

Replacing these with what? What Nvidia GPUs can operate at that power envelope?

deafcalculus 1325 days ago [-]
Lower power versions of what they put in Tegra / Switch? Or perhaps they can whip up something in 1-2 years. I'd be astonished if nVidia doesn't take any interest in smartphone GPUs after this acquisition.
fluffything 1325 days ago [-]
A Nintendo Switch has ~2 hours of battery...

There is a big difference between having interest in a market, and being able to compete in it. There are also many trade-offs.

Nobody has yet designed a GPU architecture that works at all from 500W HPC clusters down to sub-1W embedded/IoT systems, much less one that works well enough to be a market leader in all segments. So AFAICT whether this is even possible is an open research problem. If it were possible, there would already be Nvidia GPUs in at least some smartphones and IoT devices.

andy_ppp 1326 days ago [-]
This sort of stuff really isn't going to produce long-term benefits for humanity, is it?

Does anyone know if or how Apple will be affected by this? What are the licensing agreements on the ISA?

gumby 1326 days ago [-]
ARM and Apple go way back (Apple was one of the three original consortium members who founded ARM). I am sure they pay a flat fee and have freedom to do whatever they like (probably other than sub licensing).

Owning ARM would make no sense for them, as they would gain no IP but would have to deal with antitrust, which would force them to continue licensing the IP to others, which is not a business they are in.

If ARM vanished tomorrow I doubt it would affect apple’s business at all.

UncleOxidant 1326 days ago [-]
This has been out there in the news for well over a month. I guess I don't understand why Apple didn't try to make a bid for ARM, or why Apple didn't try to set up some sort of independent holding company or consortium to buy ARM. They definitely have the money and clout to have done something like that.
andy_ppp 1326 days ago [-]
I guess Apple have a perpetual ARM ISA license. They haven’t used the CPU designs from ARM for many years.
dharma1 1326 days ago [-]
I doubt Apple's ARM license will be affected. But I think they will be getting tougher competition, in the sense that Android phones will be getting a fair bit better now - most of them will be using super-well-integrated Nvidia GPU/ML chips in the future because of this deal.

I think it will also bring Google and Nvidia closer together

curiousmindz 1326 days ago [-]
Why do you think Nvidia cares about Arm? Probably the "access" into a lot of industries?
banjo_milkman 1326 days ago [-]
I think this is driven partly by embedded applications and more directly by the datacenter.

Nvidia already owns the parallel compute/ML part of the datacenter, and the Mellanox acquisition brought the ability to compete in the networking part of the datacenter - but they were missing CPU IP, for tasks that aren't well matched to the GPU. This plugs that hole. They are in control of a complete data-center solution now.

paulmd 1326 days ago [-]
There's a huge number of synergies that this deal provides.

(a) NVIDIA becomes a full-fledged full-stack house; they have both CPU and GPU now. They can now compete with AMD and Intel on equal terms. That has huge implications in the datacenter.

(b) GeForce becomes the reference implementation of the GPU; ARM processors now directly fund NVIDIA's desktop/datacenter R&D in the same way consoles and Samsung SoCs fund Radeon's R&D. CUDA can be used anywhere on any platform easily.

(c) Acqui-hire for CPU development talent. NVIDIA's efforts in this area have not been very good to date. Now they have an entire team that is experienced in developing ARM and can aim the direction of development where they want.

Basically there's a reason that NVIDIA was willing to pay more than anyone else for this property. And (Softbank's) Son desperately needed a big financial win to show his investors that he's not a fucking idiot for paying $32b for ARM and to make up for his other recent losses.

RantyDave 1326 days ago [-]
Quite. Nvidia will be able to make a single chip with CPU, GPU and InfiniBand. Plus they'll score some people from Arm who know about cache coherency. We can see the future datacentre starting to form...
andy_ppp 1326 days ago [-]
No, to make a better ARM chip with awesome graphics/AI, not licence the design, and take all the mobile CPU profits for 5-10 years?
choiway 1325 days ago [-]
I'm confused. What is it that Nvidia can do by owning ARM that it can't do by just licensing the architecture? Can't they just license and build all the chips people think they'll build without buying the whole thing?
browserface 1326 days ago [-]
Vertical integration. It's the end products, not the producers that matter.

Or maybe, more accurately, the middle of the supply chain doesn't matter. The most value is at either end: raw materials and energy, and end products.

Or so it seems :p ;) xx

gautamcgoel 1326 days ago [-]
This is awful. Out of all the big tech companies, Nvidia is probably the least friendly to open source and cross-platform compatibility. It seems to me that their goal is to monopolize AI hardware over the next 20 years, the same way Intel effectively monopolized cloud hardware over the last 20. Expect to see less choice in the chip market and more and more proprietary software frameworks like CUDA. A sad day for CS and for AI.
tony 1326 days ago [-]
Surprisingly - they have a working driver for FreeBSD. Never had an issue with it - and the performance is fantastic. As far back as the early 2000s I remember installing proprietary Nvidia drivers on Linux and playing UT2004.

Maybe Nintendo/Sony uses Nvidia cards on their developer machines? I imagine FreeBSD drivers aren't simply altruism on their part.

On the other hand, stagnation on other fronts:

- Nouveau (tried recently) is basically unusable on Ubuntu. As in the mouse/keyboard locks every 6 seconds.

- Proprietary drivers won't work with wayland

And since their stuff isn't open, the community can't do much to push Nouveau forward.

phire 1326 days ago [-]
Nvidia's "Blob" approach does have an advantage when it comes to supporting random OSes.

It's less of a driver and more of an operating system. Basically self-contained with all the support libraries it needs. Super easy to port to a new operating system, and any driver improvements work on all OSes.

But the approach also has many downsides. It's big. It ignores all the native stuff (like linux's GEM interface).

It also has random issues with locking up the entire system. Like if you are debugging a process with the linux drivers, a breakpoint or a pause in the wrong place can deadlock the system.

loeg 1326 days ago [-]
You say that, but: Nvidia's "blob" driver doesn't have CUDA, NVENC/NVDEC, or DDC I2C support on FreeBSD — despite all of this functionality being specific to Nvidia's own hardware, and the support being present in the Linux blob, which runs on a relatively similar platform.

If the only differing bits were the portability framework, this would just be a matter of adding missing support. But it isn't — the FreeBSD object file Nvidia publishes lacks the internal symbols used by the Linux driver.

magic_quotes 1326 days ago [-]
> If the only differing bits were the portability framework, this would just be a matter of adding missing support. But it isn't — the FreeBSD object file Nvidia publishes lacks the internal symbols used by the Linux driver.

In fact it is. For Linux and FreeBSD Nvidia distributes exactly the same blob for compilation into nvidia.ko; the blobs for nvidia-modeset.ko are slightly different. (Don't take my word for it, download both drivers and compare kernel/nvidia/nv-kernel.o_binary with src/nvidia/nv-kernel.o.) Nothing is locked in the closed source part.
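
If you want to automate that check, here's a rough sketch in Python (the extraction directories below are hypothetical; point them at wherever you unpacked the two driver packages):

    import hashlib

    def sha256(path):
        """Hash a file in chunks so a large blob doesn't need to fit in memory."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    # Hypothetical paths to the unpacked Linux and FreeBSD driver trees
    linux_blob = "NVIDIA-Linux-x86_64/kernel/nvidia/nv-kernel.o_binary"
    freebsd_blob = "NVIDIA-FreeBSD-x86_64/src/nvidia/nv-kernel.o"

    # True if the two blobs are byte-for-byte identical
    print(sha256(linux_blob) == sha256(freebsd_blob))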

loeg 1320 days ago [-]
The userspace CUDA/NVENC libraries are distributed as pre-compiled artifacts in the Linux driver, and no such FreeBSD library is distributed in the FreeBSD driver package.
magic_quotes 1326 days ago [-]
Native Linux stuff is not at all native for FreeBSD. If Nvidia suddenly decides to open their driver (to merge it into Linux), FreeBSD and Solaris support will be the first thing thrown out.
zamadatix 1326 days ago [-]
Not just developer machines: e.g. the Nintendo Switch uses an Nvidia Tegra X1 and runs FreeBSD. Lots of game consoles do; you don't have to worry about the GPL.
Shared404 1326 days ago [-]
I thought the switch ran a modified form of android, but just looked it up and found this on the Switch Wikipedia page:

> Despite popular misconceptions to the contrary, Horizon [The switch software's codename] is not largely derived from FreeBSD code, nor from Android, although the software licence[7] and reverse engineering efforts[8][9] have revealed that Nintendo does use some code from both in some system services and drivers.

That being said, at least one of the PlayStations runs a modified form of FreeBSD.

Edit: add [The ... codename]

loeg 1326 days ago [-]
The FreeBSD blob driver will drive a monitor, but it lacks a bunch of GPU hardware support the Linux one has: all of CUDA, NVENC/NVDEC, I²C DDC, and certainly more.
non-entity 1326 days ago [-]
I knew it was too good to be true.
gautamcgoel 1326 days ago [-]
Is the FreeBSD driver open source? Also, somehow I was under the impression they stopped maintaining the FreeBSD driver. Is that correct?
magic_quotes 1326 days ago [-]
> impression they stopped maintaining the FreeBSD driver.

It's maintained while simultaneously not receiving any new features.

throwaway2048 1326 days ago [-]
Note this includes support for newer cards, so it's only a matter of time before FreeBSD support is EOL'd.
loeg 1326 days ago [-]
https://www.nvidia.com/Download/driverResults.aspx/163239/en...

The latest FreeBSD driver is 450.66, published 2020-8-18. Supports RTX 20xx, GTX 16xx, GTX 10xx, and older.

magic_quotes 1326 days ago [-]
New cards are fully supported. Why do you think they aren't?
Conan_Kudo 1326 days ago [-]
They did. The proprietary NVIDIA driver is required on FreeBSD as well. The FreeBSD community just doesn't care about this.
dheera 1326 days ago [-]
The proprietary driver also doesn't support fractional scaling on 20.04 and I've been waiting ages for that.
therealmarv 1326 days ago [-]
The problem is more that AMD is sleeping with regard to GPU AI and good software interfaces.
burnte 1326 days ago [-]
Hardly sleeping; they have a fraction of the revenue and profit of Nvidia and Intel. Revenue was half of Nvidia's, profit was 5% of Nvidia's. Intel is even bigger. They only have so much in the way of R&D money.
adventured 1326 days ago [-]
> Revenue was half of Nvidia, profit was 5% of NVidia.

For operating income it's 25%, for net income it's 18%; not 5%.

Last four quarters operating income for AMD: $884 million

Last four quarters operating income for Nvidia: $3.5 billion

This speaks to the dramatic improvement in AMD's operating condition over the last several years. For contrast, in fiscal 2016 AMD's operating income was negative $382 million. Op income has increased by over 300% in just ~2 1/2 years. Increasingly AMD is no longer a profit lightweight.

burnte 1326 days ago [-]
I was using 2019 numbers rather than last 4 quarters. I also talked about revenue and profit, not operating anything.

AMD 2019 Revenue: $6.73b [1] NVIDIA 2019 Revenue: $11.72b [2]

Roughly half, as I said.

AMD 2019 Profit (as earnings per share): $0.30 [1] NVIDIA 2019 Profit (as earnings per share): $6.63 [2]

4.52%, rounds to 5%, as I said.

However, you still proved my point. Lightweight or not, they do not have, and have not had, the amount of money available compared to Nvidia and Intel. It's growing, they'll be able to continue to invest, and they have an advantage in the CPU space that should last for another year or two, giving them a great influx of cash; their focus on Zen 2 really paid off, allowing them greater cash flow to focus on GPUs as well.

[1] https://ir.amd.com/news-events/press-releases/detail/930/amd... [2] https://nvidianews.nvidia.com/news/nvidia-announces-financia...

nerderloo 1326 days ago [-]
Earnings per share is only useful when you look at the stock price. Since the number of outstanding shares differs between the two companies, it's incorrect to compare the companies' profits based on EPS.
adventured 1326 days ago [-]
> AMD 2019 Profit (as earnings per share): $0.30 [1] NVIDIA 2019 Profit (as earnings per share): $6.63 [2]

> 4.52%, rounds to 5%, as I said.

You're misunderstanding how to properly compare profitability between two companies.

If Company A has 1 billion shares outstanding and earns $0.10 per share, that's $100m in profit.

If Company B has 10 billion shares outstanding and earns $0.05 per share, that's $500m in profit.

Company A is not 100% larger on profit just because they earned more per share. It depends on how many shares you have outstanding, which is what you failed to account for.
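
The same arithmetic as a tiny Python sketch (using the hypothetical Company A/B numbers above, not AMD's or Nvidia's actual figures):

    # Total profit = EPS * shares outstanding; EPS alone ignores share count
    company_a = {"eps": 0.10, "shares": 1_000_000_000}
    company_b = {"eps": 0.05, "shares": 10_000_000_000}

    profit_a = company_a["eps"] * company_a["shares"]   # $100 million
    profit_b = company_b["eps"] * company_b["shares"]   # $500 million

    print(f"A: ${profit_a:,.0f}, B: ${profit_b:,.0f}")
    print("B earns more in total despite the lower EPS:", profit_b > profit_a)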

AMD's profit was not close to 5% of Nvidia's in 2019. That is what you directly claimed (as you're saying you went by the last full fiscal year).

AMD had $341m in net income in their last full fiscal year. Nvidia had $2.8 billion in net income for their last full fiscal year. That's 12%, not 5%. And AMD's operating income was 22% of Nvidia for the last fiscal year.

The trailing four quarters and operating income, is the superior way to judge the present condition of the two companies, rather than using the prior fiscal year. Especially given the rapid ongoing improvement in AMD's business. Regardless, even going by the last full fiscal year, your 5% figure is still wrong by a large amount.

Operating income is a key measure of profitability and it's a far better manner of gauging business profitability than net income at this point. That's because the modern net income numbers are partially useless as they will include such things as asset gains/losses during the quarter. If you want to read up more on it, Warren Buffett has pointed out the absurdity of this approach on numerous occasions (if Berkshire's portfolio goes up a lot, they have to report that as net income, even though it wasn't a real profit generation event).

I didn't say anything refuting your revenue figures, because I wasn't refuting them. I'm not sure why you mention that.

Polylactic_acid 1326 days ago [-]
ROCm is their product for compute, and it still can't run on their latest Navi cards, which have been out for over a year, while CUDA works on every Nvidia card on day one.
reader_mode 1326 days ago [-]
Seeing their market performance, I have no doubt they could get capital for R&D.
zrm 1326 days ago [-]
Their market performance is a relatively recent development. R&D has a lead time.
Aeolun 1326 days ago [-]
It is frankly amazing AMD can keep up at all.
RantyDave 1326 days ago [-]
They are doing something very right over there.
slavik81 1326 days ago [-]
Aside from Navi support, which has already been mentioned, what would you like to see?
twblalock 1326 days ago [-]
On the other hand, if Nvidia wants ARM to succeed (and why else would they acquire it?), they can be a source of more competition in the CPU market.

I don't really see how this deal makes the CPU market worse -- wasn't the ARM market for mobile devices basically dominated by Qualcomm for years? Plus, the other existing ARM licensees don't seem to be impacted. On the other hand, I do see a lot of potential if Nvidia is serious about innovating in the CPU space.

QuixoticQuibit 1326 days ago [-]
NVIDIA’s hardware works on x86, PowerPC, and ARM platforms.

Many of their AI libraries/tools are in fact open source.

They stand to be a force that could propel ARM's strength in data center and desktop computing. For some reason you're okay with the current x86 duopoly held by AMD and Intel, both of whom control their own destiny over CPUs and GPUs.

The HN crowd is incredibly biased against certain companies. Why not look at some of the potential bright sides to this for a more nuanced and balanced opinion?

LanternLight83 1326 days ago [-]
There are good points on both sides; as a Linux user, I feel the effects of their proprietary drivers and uncooperative approach, while I can also appreciate that they've managed to work with the Gnome and KDE projects to have some support under Wayland, and the contributions they've made to the machine learning communities. As a whole, I do think that the former outweighs the latter, and I loathe the acquisition, but I do think that the resources they're packing will bring ARM to new heights for the majority of users.
endgame 1326 days ago [-]
selectodude 1326 days ago [-]
"October 26, 2017"

I'm not coming out and saying it's gotten significantly better, but that is a three year old article and Nvidia-wayland does work on KDE and Gnome.

timidger 1326 days ago [-]
That's because they implemented EGLStreams; nothing major has changed since then. There's been a lot of talk, but no action towards unification. The ball is entirely in Nvidia's court, and they continue to work together on this.
1326 days ago [-]
CountSessine 1326 days ago [-]
The funny thing about GBM vs EGLStreams, though, based on everything I've read online, is that there's broad agreement that EGLStreams is the technically superior approach - and that the reason it's been rejected by the Linux graphics devs is that GBM, while inferior, has broad hardware vendor support. Apparently very few GPU vendors outside the big 3 had decent EGL implementations.
ddevault 1326 days ago [-]
That's not true. EGLStreams would be better suited to Nvidia's proprietary driver design, but it's technically inferior. This is the "broad agreement". No one but Nvidia wants EGLStreams, or we would have implemented it in other drivers.
CountSessine 1326 days ago [-]
You know I did a bit more reading about this, and it sounds like you’re right. There’s a great discussion about it here:

https://mesa-dev.freedesktop.narkive.com/qq4iQ7RR/egl-stream...

arp242 1326 days ago [-]
I don't really have an opinion on nVidia, as I haven't dealt with any of their products for over a decade; my own problem with this is somewhat more abstract: I'm not a big fan of this constant drive towards merging all these tech companies (or indeed, any company really). Perhaps there are some short-term advantages for the ARM platform, but in the long term it means a small number of tech companies will have all the power, which doesn't strike me as a good thing.
QuixoticQuibit 1326 days ago [-]
I don’t disagree with you, but I see it two ways:

1. The continued conglomeratization in the tech sector is a worrying trend as we see fewer and fewer small players.

2. Only a large-ish company could provide effective competition in the CPU/ISA/architecture space against the current x86 duopoly.

arp242 1326 days ago [-]
I'm not so sure about that second point; I don't see why an independent ARM couldn't provide effective competition. Server ARMs have been a thing for a while, and Apple has been working on ARM MacBooks for a while. I believe the goal is even to completely displace Intel MacBooks in favour of the ARM ones eventually.

The big practical issue is probably software compatibility and the like, and it seems to me that the Apple/macOS adoption will do more for that than nVidia ownership.

dis-sys 1326 days ago [-]
> Why not look at some of the potential bright sides to this for a more nuanced and balanced opinion?

Because as an NVIDIA user for the last 20 years, I have never seen such a bright side when it comes to open source.

ianai 1326 days ago [-]
Yes, I wonder the same thing about the sentiment against Nvidia. It'd be helpful if there were some wiki about things they've killed or instances where they've acted against FOSS systems.
mixmastamyk 1326 days ago [-]
Linus famously gave them the middle-finger.
paulmd 1326 days ago [-]
"linus went on a hyperbolic rant" isn't sufficient evidence for anything.

linus is a hyperbolic jerk (as he admitted himself for a few months before resuming his hyperbolic ways) who is increasingly out of touch with anything outside the direct sphere of his projects. Like his misguided and completely unnecessary rants about ZFS or AVX.

if there are technical merits to discuss you can post those instead of just appealing to linus' hyperbole.

(I won't even say "appeal to authority" because that's not what you're appealing to. You're literally appealing to his middle finger.)

arp242 1326 days ago [-]
He just responded to a complaint that nVidia hardware wasn't working with "nVidia is the worst company we've dealt with". I don't think that's "out of touch", it's pretty much the kind of stuff his job description entails. If you don't like his style, fair enough, but if the leader of a project says "company X is the worst company we deal with" then that doesn't inspire a whole lot of confidence.
paulmd 1325 days ago [-]
AFAIK the debate was mostly settled by NVIDIA submitting their own EGLStreams backend for Wayland (that promptly exposed a bunch of Wayland bugs). So difficult to work with, that NVIDIA, asking to do something different and then submitting their own code to implement it!

https://www.phoronix.com/scan.php?page=news_item&px=EGLStrea...

AFAIK it also ended up being literally a couple thousand lines of code, not some massive endeavor, so the Wayland guys don't come off looking real great; it looks like they have their own Not Invented Here syndrome and certainly a lot of generalized hostility towards NVIDIA. Like Torvalds, I'll be blunt: my experience is that a lot of people just know NVIDIA is evil because of these dozens of little scandals they've drummed up, and they almost all fall apart when you look into them, but people just fall back on asserting that NVIDIA must be up to something because of these 27 other things (that also fall apart when you poke them a bit). It is super trendy to hate on NVIDIA in the same way it's super trendy to hate on Apple or Intel.

Example: everyone used to bitch and moan about G-Sync, the biggest innovation in gaming in 10 years. Oh, it's this proprietary standard, it's using a proprietary module, why are they doing this, why don't they support the Adaptive Sync standard? Well, at the time they started doing it, Adaptive Sync was a draft standard for power-saving in laptops that had languished for years, there was no impetus to push the standard through, there were no monitors that supported it, and no real push to implement monitors either. Why take 10 years to get things through a standards group when you can just take an FPGA and do it yourself? And once you've done all that engineering work, are you going to give it away for free? Back in 2016 I outright said that sooner or later NVIDIA would have to support Adaptive Sync or else lose the home theater market/etc as consoles gained support. People told me I was loony, "NVIDIA's just not that kind of company", etc. Well, turns out they were that kind of company, weren't they? Turns out people were mostly mad that... NVIDIA didn't immediately give all their engineering work away for free.

The GPP is the only thing I’ve seen that really stank and they backed off that when they saw the reaction. Other than that they are mostly guilty of... using a software license you don’t like. It says a lot about the success of copyleft that anyone developing software with a proprietary license is automatically suspect.

The truth is that NVIDIA, while proprietary, does a huge amount of really great engineering in novel areas that HNers would really applaud if it were any other company. Going and making your own monitor from scratch with an FPGA so you can implement a game-changing technology is exactly the kind of go-getter attitude that this site is supposed to embody.

Variable refresh rate/GSync is a game changer. DLSS 2.0 is a game changer. Raytracing is a game changer. And you have NVIDIA to thank for all of those, "proprietary" and all. They would not exist today without NVIDIA, AMD or Intel would not have independently pushed to develop those, even though they do have open-source drivers. What a conundrum.

arp242 1325 days ago [-]
I'm not sure if Linus was talking about the Wayland stuff specifically; the answer was in response to a complaint that the nvidia/Intel graphics card switching didn't work on Linux.

I haven't used Nvidia products for about 10 years and I'm not really into gaming or graphics, so I don't really have an opinion on them either way, either business or technical. I used their FreeBSD drivers back in the day and was pretty happy it allowed me to play Unreal Tournament on my FreeBSD machine :-)

Linus is not always right, but a lot of what he says is often considerably more nuanced and balanced than his "worst-of" highlight reel suggests. There are plenty of examples of that in the presentation/Q&A he did from which this excerpt comes, for example (but of course, most people only see the "fuck you" part).

So if Linus – the person responsible for writing operating systems with their hardware – says they're the "worst company we deal with" then this strikes me as a good reason to at least do your research if you plan to buy hardware from them, if you intend to use it with Linux anyway. I'll take your word for it that they're doing great stuff, but if it outright refuses to work on my Linux box then that's kinda useless to me.

This was also 6 or 7 years ago I think, so perhaps things are better now too.

ianai 1325 days ago [-]
Thank you for this comment. I learned quite a bit! As an investor, it makes sense to go with a company that captures the profits from an innovation for a while.
48bb-9a7e-dc4f2 1325 days ago [-]
"AFAIK". Your rant is wrong in so many levels it's not even funny.

Nvidia earned that hostility. It's not even worth replying when you're making a comment in such bad faith. There's a search function with many, many threads that dealt with this subject before, if anyone wants a less biased view of EGLStreams and Wayland.

1325 days ago [-]
andrewprock 1326 days ago [-]
The notion that the maintainer of Linux has a narrowly focused role and isn't capable of advocating for and serving the broader community is belied by the fact that Linux has grown from being a niche hobbyist platform to the de facto standard cloud OS.
paulmd 1325 days ago [-]
https://arstechnica.com/gadgets/2020/01/linus-torvalds-zfs-s...

https://www.zdnet.com/article/linus-torvalds-i-hope-intels-a...

/shrug. The guy can't keep from popping off with obviously false statements about things he knows nothing about. What exactly do you want me to say? Yes, he's been a good manager for the linux kernel, but he is self-admittedly hyperbolic and statements like these show that he really doesn't have an issue running his mouth about things that he really doesn't understand.

It is the old problem with software engineers: they think expertise in one field or one area makes them a certified supergenius with relevant input in completely unrelated areas. I can't count how many times I've seen someone on HN suggest One Weird Trick To Solve Hard Problems in [aerospace/materials sciences/etc]. Linus suffers from the same thing.

His experiences with NVIDIA are probably relevant, and if so we can discuss that, but the fact that he gave someone the middle finger in a Q+A session is not. That's just Linus being an asshole.

(and him being a long-term successful project manager doesn't make him not an asshole either. Jensen's an asshole and he's one of the most successful tech CEOs of all time. Linus doesn't mince words and we should do the same - he's an asshole on a professional level, the "I'm just being blunt!" schtick is just a nice dressing for what would in any other setting be described as a textbook toxic work environment, and his defenses are textbook "haha it's just locker room talk/we're all friends here" excuses that people causing toxic work environment are wont to make. He knows it, he said he'd tone it down, that lasted about a month and he's back to hyperbolic rants about things he doesn't really understand... like ZFS and AVX. But hey I guess they're not directed at people this time.)

Again, if he's got relevant technical input we can discuss that but "linus gives the middle finger!!!!" is not the last word on the topic.

andrewprock 1325 days ago [-]
I don't take issue with the fact that Linus' behavior is problematic.
mixmastamyk 1326 days ago [-]
A little defensive, no? I trust his judgement more than Mr. Random on the internets.
WrtCdEvrydy 1326 days ago [-]
Nvidia does not publish drivers, only binary blobs.

You generally have to wrap those to use them, and the result isn't FOSS-compatible.

dheera 1326 days ago [-]
> The HN crowd is incredibly biased against certain companies.

No kidding about the HN crowd. Every time I post a comment criticizing Apple's monopoly and policies I get downvoted to oblivion, and I'd say that some of the things they do, e.g. on the App Store and with proprietary hardware/software combinations, are far more egregious than anything Nvidia has ever done. The HN algorithm basically encourages an echo chamber of people who gang up and downvote/flag others who don't agree with the gang opinion.

(Psst ... If you see this comment disappear after a while, it's probably because the same Apple fanboys found this comment and decided to hammer it down again.)

dang 1326 days ago [-]
This is off topic and breaks the site guidelines. Would you please review them and stick to the rules?

This sort of $BigCo metaflamewar in which each side accuses HN of being shills for the opposite side is incredibly repetitive and tedious. It's also a cognitive bias; everyone feels like the community (and the moderators for that matter) is biased against whatever view they favor.

https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...

https://news.ycombinator.com/newsguidelines.html

Shared404 1326 days ago [-]
Haven't looked at the rest of your posts, but if this gets flagged, it's probably from this:

[Edit: I disagree with my past self for having put this comment here. If one feels they notice something like this, they should probably comment on it. Leaving it here for clarity's sake.]

> The HN algorithm basically encourages an echo chamber of people who gang up and downvote/flag others who don't agree with the gang opinion.

or this:

> (Psst ... If you see this comment disappear after a while, it's probably because the same Apple fanboys found this comment and decided to hammer it down again.)

than this:

> criticizing Apple's monopoly and policies

Source: Have criticized Apple without having been downvoted and flagged.

dheera 1326 days ago [-]
I think it depends on how you criticize them. I've criticized:

- The idea that we can trust Apple with privacy, when their OS isn't open source

- The idea that the Apple Store is doing good things in the disguise of privacy when they actually play corporate favoritism and unfairly monopolize the app market

- The idea that a PIN has enough entropy to be a good measure of security

- Right to repair, soldered-to-motherboard SSDs

- The environmental waste associated with the lack of upgradability

- Price gouging on upgrades ($400 for +512GB, anyone? When I can get 4TB on Amazon for $500?) and lack of user upgradability

I've been downvoted to hell for speaking the above. There are a bunch of brainwashed Apple worshippers lurking around here who don't accept any criticism of the holy.

I totally welcome engaging in an intelligent discussion about any of the above, but downvotes and flagging (in most cases without any response) doesn't accomplish any of that, and serves just to amplify the fanboy echo chamber.

Also, I don't think criticizing HN's algorithm or design should warrant being flagged or downvoted, either. Downvoting should be reserved for trolling, not intelligent protest. I'm also of the opinion that downvotes shouldn't be used for well-written disagreement, as that creates an echo chamber and suppresses non-majority opinions.

Shared404 1326 days ago [-]
> I've been downvoted to hell for speaking the above.

Fair enough. I don't think I've ever mentioned Apple by name, just participated in threads about them and commented on how I disagree about how they do some things.

I also went and reread some of my comments, and realized I haven't criticized Apple as much here as I'd originally thought. Apparently I mostly stick to that in meatspace.

> There are a bunch of brainwashed Apple worshippers lurking around here who don't accept any criticism of the holy.

As there are everywhere unfortunately. I haven't run into many here personally, but have no difficulty believing they are here.

> Also, I don't think criticizing HN's algorithm or design should warrant being flagged or downvoted, either. Downvoting should be reserved for trolling, not intelligent protest. I'm also of the opinion that downvotes shouldn't be used for well-written disagreement, as that creates an echo chamber and suppresses non-majority opinions.

I agree here as well. I was more trying to point out the combative tone than the content of the text. There was also some content that could be interpreted as accusations of shilling, which is technically against the guidelines.

To be clear -- I don't think your comment was accusing people of shilling, just that I can see how people could interpret it that way.

Edit: The first quote I took from your comment really shouldn't have been there. Sorry about that. I still think if you get flagged it's because of the "(Psst..." part though. That one does come off as a bit aggressive.

barbecue_sauce 1326 days ago [-]
What's the point of pretending that people don't know you will get downvoted for holding an unpopular opinion? It's tautological.
throwaway2048 1326 days ago [-]
The OP said nothing about being ok with the x86 duopoly.

It's possible to dislike two things at once, and it's also possible to be wary of new developments that give much more market power to a notoriously uncooperative and closed company.

m3kw9 1326 days ago [-]
Lol at the italicized "awful" and your "seems to me" angle
starpilot 1326 days ago [-]
Hyperbole is how you get upvotes. No one really reacts to wishy-washy answers, it's the "OMG world is ending" provocative stuff that gets people engaged.
sillysaurusx 1326 days ago [-]
AI training is moving away from CUDA and toward TPUs anyway. DGX clusters can't keep up.
ladberg 1326 days ago [-]
And Nvidia's GPUs now include the same type of hardware that TPUs have, so there's no reason to believe that TPUs will win out over GPUs.
sillysaurusx 1326 days ago [-]
The key difference between a TPU and a GPU is that a TPU has a CPU. It's an entire computer, not just a piece of hardware. Is nVidia moving in that direction?
newsclues 1326 days ago [-]
They just bought ARM for $40 billion. I think they want to integrate CPUs, GPUs, and high-speed networking.
dikei 1326 days ago [-]
In terms of cutting-edge tech, they have their own GPUs, CPUs from ARM, and networking from Mellanox, so I'd say they're pretty much set to build a kick-ass TPU.
shaklee3 1326 days ago [-]
A TPU is a chip you cannot program. It's purpose-built and can't run a fraction of the types of workloads that a GPU can.
sillysaurusx 1326 days ago [-]
I don't know where all of this misinformation is coming from or why, but, as someone who has spent the last year programming TPUs to do all kinds of things that a GPU can't do, this isn't true.

Are we going to simply say "Nu uh" at each other, or do you want to throw down some specific examples so I can show you how mistaken they are?

sorenbouma 1326 days ago [-]
I'm a TPU user and I'd be interested to see a specific example of something that can be done on TPU but not GPU.

Perhaps I'm just not experienced enough with the programming model, but I've found them to be strictly less flexible/more tricky than GPUs, especially for things like conditional execution, multiple graphs, variable size inputs and custom ops.

sillysaurusx 1326 days ago [-]
Sure! I'd love to chat TPUs. There's a #tpu discord channel on the MLPerf discord: https://github.com/shawwn/tpunicorn#ml-community

The central reason that TPUs feel less flexible is Google's awful mistake in encouraging everyone to use TPUEstimator as the One True API For Doing TPU Programming. Getting off that API was the single biggest boost to my TPU skills.

You can see an example of how to do that here: https://github.com/shawwn/ml-notes/blob/master/train_runner.... This is a repo that can train GPT-2 1.5B at 10 examples/sec on a TPUv3-8 (aka around 10k tokens/sec).

Happy to answer any specific questions or peek at codebases you're hoping to run on TPUs.
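
To make "getting off TPUEstimator" concrete, here's a minimal sketch of the alternative using the TF2 tf.distribute API. To be clear: this is not the code from the repo above, just the general shape of TPU training without TPUEstimator; the resolver argument depends entirely on your setup, and the toy Keras model is a placeholder.

    import tensorflow as tf

    # Point the resolver at your TPU. On a TPU VM, tpu="local" works;
    # otherwise pass the TPU name or its grpc:// address (setup-specific).
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="local")
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)
    strategy = tf.distribute.TPUStrategy(resolver)

    # Variables created inside the scope are replicated across the TPU cores.
    with strategy.scope():
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(128, activation="relu"),
            tf.keras.layers.Dense(10),
        ])
        model.compile(
            optimizer="adam",
            loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        )

    # Any tf.data pipeline works; drop_remainder keeps shapes static for XLA.
    ds = tf.data.Dataset.from_tensor_slices((
        tf.random.normal([1024, 64]),
        tf.random.uniform([1024], maxval=10, dtype=tf.int32),
    )).batch(128, drop_remainder=True)

    model.fit(ds, epochs=1)

Swap the TPUStrategy for a GPU/CPU strategy and the same training code runs unchanged, which is a big part of why ditching TPUEstimator makes TPUs feel far less alien.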

slaymaker1907 1326 days ago [-]
That doesn't answer the question of what a TPU can do that a GPU can't. I think the OP means impossible for the GPU, not just slower.
slaymaker1907 1326 days ago [-]
You can run basically any C program on a CUDA core, even ones requiring malloc. It may not be efficient, but you can do it. Google themselves call GPUs general-purpose and TPUs domain-specific. https://cloud.google.com/blog/products/ai-machine-learning/w...
shaklee3 1326 days ago [-]
Please show me the API where I can write a generic function on a TPU. I'm talking about writing something like a custom reduction or a peak search, not offloading a TensorFlow model.

I'll make it easier for you, directly from Google's website:

"Cloud TPUs are optimized for specific workloads. In some situations, you might want to use GPUs or CPUs on Compute Engine instances to run your machine learning workloads."

Please tell me a workload a gpu can't do that a TPU can.

sillysaurusx 1326 days ago [-]
Sure, here you go: https://www.tensorflow.org/api_docs/python/tf/raw_ops

In my experience, well over 80% of these operations are implemented on TPU CPUs, and at least 60% are implemented on TPU cores.

Again, if you give a specific example, I can simply write a program demonstrating that it works. What kind of custom reduction do you want? What's a peak search?

As for workloads that GPUs can't do, we regularly train GANs at 500+ examples/sec across a total dataset size of >3M photos. Rather hard to do that with GPUs.

shaklee3 1326 days ago [-]
Well, there you go. For one, TensorFlow is not a generic framework like CUDA is, so you lose a whole bunch of the configurability you have with CUDA. So, for example, even though there is an FFT raw function, there doesn't appear to be a way to do more complicated FFTs, such as an overlap-save. This is trivial to do on a GPU, and is built into the library. The raw functions it provides are not direct access to the hardware and memory subsystem; they're a set of raw functions covering a small subset of the total problem space. And if you're saying that running something on a TPU's CPU cores is in any way going to compete with a GPU, then I don't know what to tell you.

You did not give an example of something GPUs can't do. All you said was that TPUs are faster for a specific function in your case.

sillysaurusx 1326 days ago [-]
> For one, TensorFlow is not a generic framework like CUDA is, so you lose a whole bunch of the configurability you have with CUDA

Why make generalizations like this? It's not true, and we've devolved back into the "nu uh" we originally started with.

> This is trivial to do on a GPU, and is built into the library

Yes, I'm sure there are hardwired operations that are trivial to do on GPUs. That's not exactly a +1 in favor of generic programmability. There are also operations that are trivial to do on TPUs, such as CrossReplicaSum across a massive cluster of cores, or the various special-case Adam operations. This doesn't seem related to the claim that TPUs are less flexible.

> The raw functions it provides are not direct access to the hardware and memory subsystem.

Not true. https://www.tensorflow.org/api_docs/python/tf/raw_ops/Inplac...

Jax is also going to be giving even lower-level access than TF, which may interest you.

> You did not give an example of something GPUs can't do. All you said was that TPUs are faster for a specific function in your case.

Well yeah, I care about achieving goals in my specific case, as you do yours. And simply getting together a VM that can feed 500 examples/sec to a set of GPUs is a massive undertaking in and of itself. TPUs make it more or less "easy" in comparison. (I won't say effortless, since it does take some effort to get yourself into the TPU programming mindset.)

shaklee3 1326 days ago [-]
I gave you an example of something you can't do, which is an overlap-save FFT, and you ignored that completely. Please implement it, or show me any example of someone implementing any custom FFT that's not a simple, standard, batched FFT. I'll take any example of implementing any type of signal processing pipeline on TPU, such as a 5G radio.

Your last sentence is pretty funny: a GPU "can't do" certain workloads because a workload it can do is too slow for you. Yet it remains a fact that a TPU cannot do certain workloads without offloading to the CPU (making it orders of magnitude slower), and that's somehow okay? It seems where this discussion is going is that you pointed to a TensorFlow library that may or may not offload to a TPU, and it probably doesn't. But even that library is too incomplete to implement things like a 5G LDPC decoder.

sillysaurusx 1326 days ago [-]
Which part of this can't be done on TPUs? https://en.wikipedia.org/wiki/Overlap%E2%80%93save_method#Ps... As far as I can tell, all of those operations can be done on TPUs. In fact, I linked to the operation list that shows they can be.

You'll need to link me to some specific implementation that you want me to port over, not just namedrop some random algorithm. Got a link to a github?

If your point is "There isn't a preexisting operation for overlap-save FFT" then... yes, sure, that's true. There's also not a preexisting operation for any of the hundreds of other algorithms that you'd like to do with signal processing. But they can all be implemented efficiently.
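
Just so we're pinning down the same algorithm, here's overlap-save as a short NumPy sketch (plain NumPy on CPU, not TPU code; the FFT-length heuristic is arbitrary). Every step is an FFT, an elementwise multiply, or a slice, which is exactly the kind of thing I'd expect to express with tf.signal.rfft/irfft and friends:

    import numpy as np

    def overlap_save_convolve(x, h, fft_len=None):
        # Linear convolution of a long signal x with FIR filter h via overlap-save.
        M = len(h)
        if fft_len is None:
            fft_len = 8 * 2 ** int(np.ceil(np.log2(M)))  # arbitrary heuristic
        N = fft_len
        step = N - (M - 1)                 # fresh samples consumed per block
        H = np.fft.rfft(h, n=N)            # filter spectrum, computed once

        x_padded = np.concatenate([np.zeros(M - 1), x])  # "history" for block 0
        out = []
        for start in range(0, len(x) + M - 1, step):
            block = x_padded[start:start + N]
            if len(block) < N:
                block = np.pad(block, (0, N - len(block)))
            y = np.fft.irfft(np.fft.rfft(block) * H, n=N)
            out.append(y[M - 1:])          # discard the circularly wrapped samples
        return np.concatenate(out)[:len(x) + M - 1]

    # Sanity check against direct convolution.
    x = np.random.randn(10000)
    h = np.random.randn(63)
    assert np.allclose(overlap_save_convolve(x, h), np.convolve(x, h))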

Yet it remains a fact that TPU cannot do certain workloads without offloading to the CPU (making it orders of magnitude slower), and that's somehow okay?

I think this is the crux of the issue: you're saying X can't be done, I'm saying X can be done, so please link to a specific code example. Emphasis on "specific" and "code".

shaklee3 1326 days ago [-]
Let's just leave this one alone then. I can't argue with someone who claims anything is possible, yet absolutely nobody seems to be doing what you're referring to (except you). A100 now tops all MLPerf benchmarks, and the unavailable TPUv4 may not even keep up.

Trust me, I would love it if TPUs could do what you're saying, but they simply can't. There's no direct DMA from the NIC to the TPU, so I can't do a streaming application at 40+ Gbps to it. Even if a TPU could do all the things you claim, if it's not as fast as the A100, what's the point? To go through undocumented pain to prove something?

sillysaurusx 1326 days ago [-]
FWIW, you can stream at 10Gbps to TPUs. (I've done it.)

10Gbps isn't quite 40Gbps, but I think you can get there by streaming to a few different TPUs on different VPC networks. Or to the same TPU from different VMs, possibly.

The point is that there's a realistic alternative to nVidia's monopoly.

shaklee3 1326 days ago [-]
When I can run a TPU in my own data center, there is. Until then it precludes a lot of applications.
lostmsu 1326 days ago [-]
Where did you get this from? AFAIK GPT-3 (for example) was trained on a GPU cluster, not TPUs.
sillysaurusx 1326 days ago [-]
Experience, for one. TPUs are dominating MLPerf benchmarks. That kind of performance can't be dismissed so easily.

GPT-2 was trained on TPUs. (There are explicit references to TPUs in the source code: https://github.com/openai/gpt-2/blob/0574c5708b094bfa0b0f6df...)

GPT-3 was trained on a GPU cluster probably because of Microsoft's billion-dollar Azure cloud credit investment, not because it was the best choice.

lostmsu 1326 days ago [-]
I checked the MLPerf website, and it looks like the A100 is outperforming TPUv3, and is also more capable (there does not seem to be a working implementation of RL for Go on TPU).

To be fair, TPUv4 is not out yet, and it might catch up using the latest processes (7nm TSMC or 8nm Samsung).

https://mlperf.org/training-results-0-7

option 1326 days ago [-]
No they are not. Go read the recent MLPerf results more carefully and not Google's blog post. NVIDIA won 8/8 benchmarks for the publicly available SW/HW combo. Also 8/8 on per-chip performance. Google did show better results with some "research" system which is not available to anyone other than them yet.
sillysaurusx 1326 days ago [-]
This is a weirdly aggressive reply. I don't "read Google's blogpost," I use TPUs daily. As for MLPerf benchmarks, you can see for yourself here: https://mlperf.org/training-results-0-6 TPUs are far ahead of competitors. All of these training results are openly available, and you can run them yourself. (I did.)

For MLPerf 0.7, it's true that Google's software isn't available to the public yet. That's because they're in the middle of transitioning to Jax (and by extension, Pytorch). Once that transition is complete, and available to the public, you'll probably be learning TPU programming one way or another, since there's no other practical way to e.g. train a GAN on millions of photos.

You'd think people would be happy that there are realistic alternatives to nVidia's monopoly for AI training, rather than rushing to defend them...

p1esk 1326 days ago [-]
> transitioning to Jax (and by extension, Pytorch)

Wait, what? Why would transition to Jax imply transition to Pytorch?

llukas 1326 days ago [-]
You are basing your opinion on last year's MLPerf and some stuff that may or may not be available in the future. The MLPerf 0.7 "available" category has been ghosted by Google.

Pointing this out is not aggressive.

make3 1326 days ago [-]
this is just false
hn3333 1326 days ago [-]
Softbank buys low ($32B in 2016) and sells high ($40B in 2020). Nice trade!
aneutron 1326 days ago [-]
How does it fare, adjusting for inflation and other similar factors?
tuananh 1326 days ago [-]
I thought that would be low in the investment world.
unnouinceput 1326 days ago [-]
4 years, 8 billion. 2B/year. I don't think that's low. And during this time Arm was also filling its owner's coffers. The only real question here is whether they filled the coffers at more than 2B/year or not. My guess is they didn't. Now SoftBank has a lot of cash to acquire more shiny toys.
tuananh 1325 days ago [-]
I guess I watched too many TV shows where investors think anything below 50% ROI is low.
hajile 1325 days ago [-]
5% over inflation is average. The US Congress manages 25-30%, but that's on the back of insider trading (technically illegal thanks to a law signed by Obama, but also without much ability to actually investigate -- also a law signed by Obama a few months later without fanfare).
unixhero 1326 days ago [-]
It's so amazing Apple didn't win this M&A race.
rahoulb 1326 days ago [-]
Apple isn't interested in selling tech licences to other companies - they want to own their core technologies so they can sell products to consumers. And, as an original member of the Arm consortium, they have a perpetual licence to the Arm IP (I have no inside knowledge about that, just many people who know more than me have said it)
intricatedetail 1324 days ago [-]
Isn't the pay-for-IP model used by companies to avoid paying tax? I wonder if Nvidia is going to be forced to absorb Arm in its entirety to avoid tax issues?
wpdev_63 1324 days ago [-]
What do 5 hurricanes, a global pandemic, and Nvidia buying Arm have in common?

What a wild year! Let's not forget Apple announced they are transitioning from x86 to Arm :b.

127 1326 days ago [-]
What does this change for the STM32 and many other such low-power MCUs? They're pretty ubiquitous in electronics.
ryanmarsh 1325 days ago [-]
Simple question, is this about ARM architecture and IP... or securing a worldwide competitive advantage in 5G?
vletal 1326 days ago [-]
Nvidia being like "Apple didn't want to use our tech? Let's just buy ARM!"
jl2718 1325 days ago [-]
This is all about NVLink in ARM.
atg_abhishek 1324 days ago [-]
Has this become the post with the most points and engagement by number of comments?
kkielhofner 1325 days ago [-]
Looking at the conversations almost 24 hours after posting, the IP, licensing, ecosystem, political, and overall business aspects of this have been discussed to death. Oddly for Hacker News, there has been little discussion of the potential technical aspects of this acquisition.

Pure speculation (of course)...

To me (from a tech standpoint) this acquisition centers around three things we already know about Nvidia:

- Nvidia is pushing to own anything and everything GPGPU/TPU related, from cloud/datacenter to edge. Nvidia has been an ARM licensee for years with their Jetson line of hardware for edge GPGPU applications:

https://developer.nvidia.com/buy-jetson

Looking at the architecture of these devices (broadly speaking) Nvidia is combining an ARM CPU with their current gen GPU hardware (complete with Tensor Cores, etc). What's often left out of this mention is that they utilize a shared memory architecture where the ARM CPU and CUDA cores share memory. Not only does this cut down on hardware costs and power usage, it increases performance.

- Nvidia has acquired Mellanox for high performance network I/O across various technologies (Ethernet and Infiniband). Nvidia is also actively working to be able to remove the host CPU from as many GPGPU tasks as possible (network I/O and data storage):

https://developer.nvidia.com/gpudirect

- Nvidia already has publicly available software in place to effectively make their CUDA compute available over the network using various APIs:

https://github.com/triton-inference-server/server

Going on just the name, Triton is currently only available for inference, but it provides the ability not only to directly serve GPGPU resources via a network API at scale but ALSO to accelerate various models with TensorRT optimization:

https://docs.nvidia.com/deeplearning/triton-inference-server...
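
As a rough sketch of what "serving GPGPU resources via network API" looks like from the client side, here's a call against Triton's HTTP endpoint using the tritonclient Python package. The model name ("resnet50") and the tensor names/shapes are assumptions for illustration; they depend entirely on what's loaded in your model repository:

    # pip install "tritonclient[http]" numpy
    import numpy as np
    import tritonclient.http as httpclient

    client = httpclient.InferenceServerClient(url="localhost:8000")

    # A dummy image batch; shape/dtype must match the deployed model config.
    batch = np.random.rand(1, 3, 224, 224).astype(np.float32)

    infer_input = httpclient.InferInput("input__0", list(batch.shape), "FP32")
    infer_input.set_data_from_numpy(batch)
    requested = httpclient.InferRequestedOutput("output__0")

    result = client.infer(
        model_name="resnet50",
        inputs=[infer_input],
        outputs=[requested],
    )
    print(result.as_numpy("output__0").shape)

The point being: the caller's own CPU architecture is irrelevant. It's just HTTP (or gRPC) into a box where ARM cores, Mellanox NICs, and GPUs do the actual work.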

Given these points I think this is an obvious move for Nvidia. TDP and performance are increasingly important across all of their target markets. They already have something in place for edge inference tasks powered by ARM with Jetson, but looking at ARM core CPU benchmarks it's sub-optimal. Why continue to pay ARM licensing fees when you can buy the company, collect licensing fees, get talent, and (presumably) drastically improve performance and TDP for your edge GPGPU hardware?

In the cloud/datacenter, why continue to give up watts in terms of TDP and performance to sub-optimal Intel/AMD/x86_64 CPUs and their required baggage (motherboard bridges, buses, system RAM, etc.) when all you really want to do is shuffle data between your GPUs, network, and storage as quickly and efficiently as possible?

Of course many applications will still require a somewhat general purpose CPU for various tasks, customer code, etc. AWS already has their own optimized ARM cores in place. aarch64 is more and more becoming a first class citizen across the entire open source ecosystem.

As platform and software as a service continue to eat the world, cloud providers have likely already started migrating the underlying hardware powering these various services to ARM cores for improved performance and TDP (same product, more margin).

Various ARM cores are already showing to be quite capable for most CPU tasks but given the other architectural components in place here even the lowliest of modern ARM cores is likely to be asleep most of the time for the applications Nvidia currently cares about. Giving up licensing, die space, power, tighter integration, etc to x86_64 just seems to be foolish at this point.

Meanwhile (of course) if you still need x86_64 (or any other arch) for whatever reason you can hit a network API powered by hardware using Nvidia/Mellanox I/O, GPU, and ARM. Potentially (eventually) completely transparently using standard CUDA libraries and existing frameworks (see work like Apex):

https://github.com/NVIDIA/apex

I, for one, am excited to see what comes from this.

cowsandmilk 1326 days ago [-]
What is Amazon's license for ARM like to make graviton processors?
gigatexal 1326 days ago [-]
How large or small of a boost does this give the likes of RISC-V?
mlindner 1325 days ago [-]
This is really bad news. I hope the deal somehow falls through.
hankchinaski 1325 days ago [-]
This should have triggered an anti-trust probe into the deal, as it radically changes the powers at play in the chip market, as many have outlined.
CivBase 1325 days ago [-]
I was hoping that Apple's switch to ARM would prompt better ARM support for popular Linux distros. Given NVIDIA's track record with the OSS community, I'm definitely less hopeful now.
jbotz 1325 days ago [-]
There is now but one choice... RISC-V, full throttle.
w_t_payne 1325 days ago [-]
Sounds like we need someone to found another ARM.
bfrog 1325 days ago [-]
So when does everyone switch to risc-v then
rvz 1326 days ago [-]
What a death sentence for ARM right there, and the start of a new microprocessor winter. I guess we now have to wait for RISC-V to catch up.

Aside from that, ARM was one of the only actual tech companies the UK could talk about on the so-called "world stage" that has survived more than two decades. But instead, they continue to sell themselves and their businesses to the US instead of vice versa.

In 2011, I thought that they would learn the lessons and heed the warnings highlighted by Eric Schmidt about the UK creating long-standing tech companies like FAANMG. [0] I had high hopes for them to learn from this, but after 2016 with SoftBank and now this, it is just typical.

ARM will certainly be more expensive after this and will certainly be even more closed-source, since their Mali GPU drivers were already as closed as Nvidia's. This is a terrible outcome, but from Nvidia's perspective it makes sense. From a FOSS perspective, ARM is dead; long live RISC-V.

[0] https://www.theguardian.com/technology/2011/aug/26/eric-schm...

throwaway5792 1326 days ago [-]
Once they were sold to SoftBank ARM had no more control over its destiny. You're saying that they're repeating the pattern, but they had no choice in this today. This was SoftBank's decision as the owner of ARM.
Aeolun 1326 days ago [-]
SoftBank has some losses of its own to make up, so that's not very surprising.
broknbottle 1326 days ago [-]
Losses? Masayoshi Son just made a cool 8 billion on this deal. Time to hit up the roulette table in Vegas to double that to 16 billion
throwaway5792 1325 days ago [-]
Buying a stake in AAPL,FB, or any tech company would have netted higher returns for less effort than buying ARM. If anything, a 25% ROI in 4 years is quite poor.
paulmd 1326 days ago [-]
Love the armchair-CEO perspective that NVIDIA spent $40B only to flush the whole ARM ecosystem right down the drain; that's definitely a rational thing to do, right?

Jensen's not an idiot; how many OG 90s tech CEOs are still at the helm of the company they founded? Any severely negative move towards their customers just drives those customers to RISC-V, and they know that.

Yes, ARM customers will be paying more for their ARM IP. No, NVIDIA is not going to burn ARM to the ground.

therealmarv 1326 days ago [-]
Have you read the press release at all? It's too early to judge this now. ARM will stay in Cambridge and Nvidia wants to invest in this place.
klodolph 1326 days ago [-]
It's not too early to judge, because this has been in the news for a while and people have had the time to do an analysis of what it means for Nvidia to buy ARM. The press release doesn't add much to that analysis. We've seen a ton of press releases go by for acquisitions and they're always full of sunshine... at the moment, I'm thinking of Oracle's acquisition of Sun, but that's just one example. The typical pattern for acquisitions is that a tech company will acquire another tech company because they can extract more value from the acquired company's IP compared to the value the acquired company would have by itself, and you can extract a lot of value from an IP while you slash development budgets. Not saying that's going to happen, but it's a common pattern in acquisitions.

I think it's enough to know what Nvidia is, how they operate, and what their general strategies are.

Not saying I agree with the analysis... but I am not that optimistic.

dylan604 1326 days ago [-]
And when Facebook bought Oculus, they said no FB login would be required to use Oculus. Foxconn was supposed to ramp up production in the US according to a press release. That was then; now time has passed. BigCorp hopes your short-term memory forgets the woohooism from a bygone press release.
BLKNSLVR 1326 days ago [-]
I'm reminded of this recent development:

https://www.oculus.com/blog/a-single-way-to-log-into-oculus-...

Discussed here on HN, with the top comment being a copy-and-paste of Palmer Luckey's acknowledgement that the early critics turned out to be correct:

https://news.ycombinator.com/item?id=24201306

Different situation, different companies, but if you "follow the money" you'll never be too far wrong.

boardwaalk 1326 days ago [-]
Corporations say these types of things with every acquisition. It might be true initially and superficially, but that's all.
paulmd 1326 days ago [-]
One of the (many) synergies in this acquisition is that it's also an acqui-hire for CPU development talent. NVIDIA's efforts in that area have not gone very smoothly. They aren't going to fire all the engineers they just bought.
wmf 1326 days ago [-]
A bigger concern is that they would hoard future cores and not license them.
RantyDave 1326 days ago [-]
They might, just like Apple does. But then their forty-billion-dollar investment would stagnate and be eaten by RISC-V, which is probably what they are hoping to avoid.
sjg007 1326 days ago [-]
I doubt they will move all of the talent to the USA. I mean California is nice and all but I think a lot of Brits are happy to stay in Cambridge.
mikhailfranco 1326 days ago [-]
Cambridge is one of the most beautiful cities on the planet.
sjg007 1325 days ago [-]
Agree!
causality0 1326 days ago [-]
Why would you read the press release at all? Do you expect a company to not do what's in their own financial self-interest? Look, I love nVidia. I only buy nVidia GPUs and I adore their devices like the SHIELD TV, handheld, tablet, even the Tegra Note 7. Even I can see that they're not just buying ARM on a whim. They intend to make that money back. Them using ARM to make that money is good for absolutely nobody except nVidia themselves.
therealmarv 1326 days ago [-]
Well, it seems to be normal in 2020 to judge without reading and to customize your news yourself. Time will tell...
causality0 1325 days ago [-]
I'll listen to what they're saying but I'm paying far more attention when I look at what they're doing. The only way buying ARM helps nVidia is by damaging their competitors, AKA almost everyone.
jacques_chester 1326 days ago [-]
My experience of acquisitions is that sweet songs are sung to calm the horses. Then the next financial quarter comes around and the truck from the glue factory arrives.

If you take at face value anything from a press release, earnings call or investor relations website, then I would like to take a moment to share with you the prospectus of Brooklyn Bridge LLC.

dylan604 1326 days ago [-]
My money is currently tied up in ocean front property in Arizona.
CleanItUpJanny 1326 days ago [-]
>long standing tech companies like FAANMG

why are people going out of their way to avoid the obvious and intuitive "FAGMAN" acronym?

lacker 1326 days ago [-]
Netflix really doesn’t belong in the same category as the others. It’s big but not as big, and it isn’t a sprawling conglomerate. Clearly “FAGMA” is the best acronym.
whereistimbo 1326 days ago [-]
Or MAGA: Microsoft Apple Google (Alphabet) Amazon. $1 trillion dollar club.
broknbottle 1326 days ago [-]
I think you meant the four comma club
staz 1326 days ago [-]
GAFAM is often used in the french speaking world
gumby 1326 days ago [-]
Because smoking is no longer acceptable
realbarack 1326 days ago [-]
Because it has an offensive slur in it
stupendousyappi 1326 days ago [-]
People should be fighting to join their favorite Silicon Valley FANMAG. Some would prefer the Bill Gates FANMAG, others Jeff Bezos...
dylan604 1326 days ago [-]
Sounds more like a F-MANGA. Gates goes Super Saiyan while Bezos builds a mechwarrior army.
swarnie_ 1326 days ago [-]
Lost on the way to 4chan?
poxwole 1326 days ago [-]
It was good while it lasted. RIP ARM
oldschoolrobot 1325 days ago [-]
This is horrible for Arm
geogra4 1326 days ago [-]
SMIC and Huawei had better be prepared to dump ARM ASAP.
fizzled 1326 days ago [-]
Wow, this is tectonic. I cannot wait to see how this redraws the competition map. There are dozens of major embedded semi vendors that license Arm IP. Nvidia could eradicate them trivially.
luxurycommunism 1326 days ago [-]
Hopefully that means we will see some big leaps in performance.
01100011 1326 days ago [-]
Can we, for once, hear the opinions of people in the chip industry and not the same tired posts from software folks? NVIDIA Bad! Ok, we get it. Do you have anything more insightful than that?

I'm starting to feel like social media based on upvotes is an utter waste of time. Echo chambers and groupthink. People commenting on things they barely know anything about and getting validation from others who don't know anything. I'd rather pay for insightful commentary and discussion. I feel like reddit going downhill has pushed a new group of users to HN and it's sending it down the tube. Maybe it's time for me to stop participating and get back to work.

diydsp 1326 days ago [-]
As an embedded dev, I've watched the last 2-4 years of STM (a major ARM chip vendor) bring a large degree of integration of specialized hardware into 32-bit microcontrollers: e.g. radios, motor control, AI, 64-bit FPUs, graphics, low-power modes, etc.

I expect more of the same. The only way it could go wrong is if they lose customer focus. Microcontrollers are a competitive, near-commodity market, so companies have to provide valuable features.

I don't really know Nvidia well - I only buy their stock! - but they seem to be keeping their customers happy by paying attention to what they need. Perhaps their fabrication will be a boon to micros, as they're usually a few gens behind laptop/server processors.

systemvoltage 1326 days ago [-]
Totally. I see a future where people will have an edge over others when they pay for information rather than relying on free sources (except Wikipedia due to its scale, but still, Wikipedia is not a replacement for a proper academic textbook).

FT provides insightful commentary from finance/business side of things and their subscription is expensive - rightfully so.

paxys 1326 days ago [-]
You are just starting to feel that now?
01100011 1326 days ago [-]
I know, right? It's finally hitting me. I am realizing how much of an asshole I've become thanks to the validation of a handful of strangers on the internet. I have caught myself commenting on things I have only cursory knowledge of and being justified by the likes of strangers. I actually believed that the online communities I participated in actually represented the world at large.

I'm not quite ready to end my participation in HN, but I'm close. I am looking back on the last 10 years of participation in forums like this and wondering what the hell good it did. I am also suddenly very worried for what sites like Reddit are doing to kids. That process of validation is going to produce some very anti-social, misguided adults.

I would rather participate in an argument map style discussion, or, frankly, just read the thoughts of 'experts'.

cycloptic 1326 days ago [-]
It is best to limit your exposure to these types of sites. There was a post yesterday about Wikipedia being an addictive MMORPG. Well, so are Hacker News, Twitter, Reddit, and so on...
drivebycomment 1326 days ago [-]
You're not wrong, but most HN threads are like this. 80% of comments are low information, Dunning-Kruger effect in action. But among that, there are still some useful gems, so despite what you said, HN is still worth it. If you fold the first two top-level comments, the rest have some useful, informed perspective.

I don't see this as having that much of an impact in the short to medium term. ARM has too many intricate business dependencies and contracts that nVidia can't just get out of.

My speculation is that nVidia might be what it takes to push ARM to overcome the final hurdle into more general-purpose and server CPUs, and achieve that pipe dream of a single binary/ISA running everywhere. Humanity would be better off if a single ISA does become truly universal. Whether business/technology politics will allow that to happen, and whether nVidia has enough understanding and shrewdness to pull it off, remains to be seen.

dahart 1326 days ago [-]
> Humanity would be better off if a single ISA does become truly universal.

This is an interesting thought. I think I would agree with this in the CPU world of the last 20-30 years, but it makes me wonder a few things. Might a universal ISA eliminate major pieces of competitive advantage for chip makers, and/or stall innovation? It does feel like non-vector instructions are somewhat settled, but vector instructions haven't yet, and GPUs are changing rapidly (take NVIDIA's Tensor cores and ray tracing cores for example). With Moore's law coming to an end, a lot of people are talking about special purpose chips more and more, TPUs being a pretty obvious example, and as nice as it might be to settle on a universal ISA, it seems like we're all about to start seeing larger differences more frequently, no?

> 80% of comments are low information, Dunning-Kruger effect in action.

I really liked all of your comment except this. I have a specific but humble request aside from the negative commentary & assumption about behavior. Please consider erasing the term "Dunning-Kruger effect" from your mind. It is being used here incorrectly, and it is very widely misunderstood and abused. There is no such effect. The experiments in the paper do not show what was claimed, the paper absolutely does not support the popular notion that confidence is a sign of incompetence. (Please read the actual paper -- the experiments demonstrated a positive correlation between confidence and competence!) There have been some very wonderful analyses of how wrong the Dunning-Kruger paper was, yet most people only seem to remember the (incorrect) summary that confidence is a sign of incompetence.

https://www.talyarkoni.org/blog/2010/07/07/what-the-dunning-...

drivebycomment 1326 days ago [-]
> Might a universal ISA eliminate major pieces of competitive advantage for chip makers, and/or stall innovation?

That's a good question. I didn't try to put all the necessary nuances in a single sentence, so you're right to question a lot of the unsaid assumptions. I don't know for sure whether innovation in ISAs has run most of its course yet, but I do feel like it mostly has, given how relatively little difference it makes. I think a "truly" universal ISA, if it ever happens, would necessarily have to have governance and evolution cycles, so that people agree on the core part of the universal ISA yet have room, and a way, to let others experiment with various extensions (including vector extensions, for example), plus a process to reconcile and agree on standard adoption. I don't know if that's actually possible - it might be very difficult or impossible for many different reasons. But if it can happen, that would be beneficial, as it would reduce a certain amount of duplication and unlock certain new possibilities.

> Please consider erasing the term "Dunning-Kruger effect" from your mind. It is being used here incorrectly

Duly noted.

saiojd 1326 days ago [-]
Agreed, upvotes are a failed experiment, especially for comments.
weregiraffe 1326 days ago [-]
Wow. It costs an arm.
QuixoticQuibit 1326 days ago [-]
HN being hyperbolic and anti-NVIDIA as usual. I think this is a great thing. Finally a competitor to the AMD-Intel x86 duopoly. I imagine the focus will first be on improving ARM’s data center offerings but eventually I’m hoping to see consumer-facing parts available sometime as well.
japgolly 1326 days ago [-]
I think the biggest concern is NVIDIA's stance against OSS.
QuixoticQuibit 1326 days ago [-]
Look at their AI/CUDA documentation and associated githubs. Many of their tools and libraries are open source.

Tell me, what other AI platform works with x86 and PowerPC and ARM? Currently NVIDIA’s GPUs do.

cycloptic 1326 days ago [-]
It's good that they open sourced that stuff, but the main problem is that CUDA itself is closed source and vendor locked to nvidia's hardware.
TomVDB 1326 days ago [-]
It only makes sense to demand them to give this away for free if you consider them a hardware company and a hardware company only.

Look at them as a solutions company and the first question to answer is: why is it fine for companies like Oracle, Microsoft, Adobe etc and any other software company to profit from closed software yet a company should be ostracized for it as soon as hardware becomes part of the deal?

Nvidia invested 15 years in developing and promoting a vast set of GPU compute libraries and tools. AMD has only paid lip service and to this day treats it as an ugly stepchild, they don't even bother anymore to support consumer architectures. Nvidia is IMO totally justified to reap the rewards of what they've created.

cycloptic 1326 days ago [-]
Please don't misrepresent my statement, I haven't demanded they give anything away for free. If you're referring to the CUDA drivers, the binaries for those are already free with the hardware. And if AMD truly doesn't care about it then that's even more reason for them to open source it, because they can't really claim they're keeping it closed source to hamstring the competition anymore.

wrt Oracle, Microsoft, Adobe, and the others: I've been asking them to open source key products every chance I get. Just so you know where I'm coming from.

sreeramb93 1326 days ago [-]
I don't buy laptops with Nvidia GPUs because of nightmares I had working with them in 2014-2015. Has the support improved?
nolaspring 1326 days ago [-]
I spent most of this afternoon trying to get CUDA in Docker to work on my Mac for a machine learning use case. It doesn't. Because Nvidia.
diesal11 1326 days ago [-]
"Because NVIDIA" is blatantly false.

CUDA support for docker containers is provided through the open source Nvidia-Docker project maintained by Nvidia[1]. If anything this is a great argument for NVIDIAs usage of open source.

Searching that project's issues shows that Nvidia-Docker support on MacOS is blocked by the VM used by Docker for Mac(xhyve) not supporting PCI passthrough, which is required for any containers to use host GPU resources.[2]

xhyve has an issue for PCI passthrough, updated a few months ago, which notes that the APIs provided by Apple through DriverKit are insufficient for this use case[3]

So your comment should really say "Because Apple"

[1] https://github.com/NVIDIA/nvidia-docker

[2] https://github.com/NVIDIA/nvidia-docker/issues/101#issuecomm...

[3] https://github.com/machyve/xhyve/issues/108#issuecomment-616...

kllrnohj 1326 days ago [-]
Apple is why you can't use Nvidia hardware on a Mac, not Nvidia. Apple has exclusive control over the drivers. Nvidia can't release updates or fix things on Mac OS.

I'm all for railing on the shitty things Nvidia does do, but no reason to add some made up ones onto the pile.

jefft255 1326 days ago [-]
« I spent most of the afternoon hacking away at some unsupported edge case on an unsupported platform which is inadequate for what I’m trying to do. It doesn’t work, which is clearly nvidia’s fault. »
fomine3 1326 days ago [-]
Because Apple. NVidia tried to support Macs, but Apple stopped support.
alphachloride 1326 days ago [-]
I hope the stonks go up
RubberShoes 1326 days ago [-]
This is not good
paulmd 1326 days ago [-]
It wouldn't have been good no matter which of the people who could actually afford to buy ARM did so.

Would you rather have TSMC in control of ARM? Maybe have access to new architectures bundled with a mandate that you have to build them on TSMC's processes?

How about Samsung? All of the fab ownership concerns of TSMC plus they also make basically any tech product you care to name, so all the integration concerns of NVIDIA.

https://asia.nikkei.com/Business/Technology/Key-Apple-suppli...

Microsoft? Oracle? None of the companies who could have afforded to pay what Son wanted for ARM were any better than NVIDIA.

There are a lot of good things that will come out of this as well. NVIDIA is a vibrant company compared to a lot of the others.

leptons 1326 days ago [-]
Microsoft can't afford to buy ARM? lol... well that's not true at all.
paulmd 1326 days ago [-]
But what would microsoft do with ARM at a $40b valuation?
leptons 1325 days ago [-]
Same goes for Nvidia.
chid 1326 days ago [-]
I have heard numerous arguments, but the arguments don't feel that compelling. What are the reasons why this is bad?
dragontamer 1326 days ago [-]
Consider one of NVidia's rivals: AMD, who uses an ARM chip in their EPYC line of chips as a security co-processor. Does anyone expect NVidia to "play fair" with such a rival?

ARM as an independent company, has been profoundly "neutral", allowing many companies to benefit from the ARM instruction set. It has been run very well: slightly profitable and an incremental value to all parties involved (be you Apple's iPhone, NVidia's Tegra (aka Nintendo Switch chip), AMD's EPYC, Qualcomm's Snapdragon, numerous hard drive companies, etc. etc.). All in all, ARM's reach has been because of its well-made business decisions that have been fair to all parties involved.

NVidia, despite all their technical achievements, is known to play hardball from a business perspective. I don't think anyone expects NVidia to remain "neutral" or "fair".

paulmd 1326 days ago [-]
> Consider one of NVidia's rivals: AMD, who uses an ARM chip in their EPYC line of chips as a security co-processor. Does anyone expect NVidia to "play fair" with such a rival?

Yes, absolutely.

NVIDIA's not going to burn the ARM ecosystem to the ground. They just paid $40 billion for it. And they only had $11B of cash on hand; they really overpaid for it (because SoftBank desperately needed a big win to cover for their other recent losses).

Now: will everybody (including AMD) probably be paying more for their ARM IP from now on? Yes.

dragontamer 1326 days ago [-]
When Oracle purchased Sun Microsystems for $7.4 Billion, did you expect Oracle to burn Solaris to the ground, and turn their back on MySQL's open source philosophy? Then sue Google for billions of dollars over the Java / Android thing?

Or more recently, when Facebook bought Oculus for $2 billion, did you expect Facebook to betray the customers' trust and start pushing Facebook logins?

The Oculus / Facebook login thing just happened weeks ago. Companies betraying the promises they made to their core audience is like... bread-and-butter at this point (and seems to almost always happen after an acquisition play). We know Facebook's modus operandi, and even if it's worse for Oculus, we know that Facebook will do what Facebook does.

Similarly, we know NVidia's modus operandi. NVidia is trying to make a datacenter play and create a vertical company for high-end supercomputers. Such a strategy means that NVidia will NOT play nice with their rivals: Intel or AMD. (And the Mellanox acquisition is just icing on the cake now).

NVidia will absolutely leverage ARM to gain dominance in the datacenter. That's the entire point of this purchase.

--------

There's a story about scorpions and swimming with one on your back. I'm sure you've heard it before. Just because it's necessary for the scorpion's survival doesn't mean it is safe to trust the scorpion.

RantyDave 1326 days ago [-]
I'm not at all surprised they killed Solaris. Given that Oracle was pretty much the only software that ran on Solaris (or that there might be a good reason to run on Solaris), maybe it was just a big support headache. As for MySQL? Not surprised at all. They pretty much just bought a brand name, maybe just to spite red hat.
yyyk 1325 days ago [-]
>When Oracle purchased Sun Microsystems for $7.4 Billion, did you expect Oracle

Yes, yes, and yes. This is Oracle we're talking about. Of course they'll invest more in lawyers than tech. The only reason they still invest in Java is the lawsuit potential. If only Google had the smarts to buy Sun instead...

As for NVidia, their play probably is integration and datacenters. At the moment, going after other ARM licencees will hinder NVidia more than help (they're going after x86, no time to waste on bad PR and legal issues with small time ARM datacenter licencees; Qualcomm and Apple are in a different segment altogether). Of course, we can't guarantee it stays that way.

rtpg 1326 days ago [-]
> Or more recently, when Facebook bought Oculus for $2 Billion, did you expect Facebook to betraying the customer's trust and start pushing Facebook logins?

yes? I mean that felt eminently possible from the get-go.

whatshisface 1326 days ago [-]
Why does SoftBank's desperation make NVIDIA pay too much?
teruakohatu 1326 days ago [-]
An independent ARM that did not manufacture processors was the best outcome for everyone in the industry.

ARM, now owned by an organisation deeply embedded in processor design and manufacturing, will be licensing designs to its owner's competitors, as well as giving that owner a lot of intel on those competitors.

ARM supercomputers were poised to take on Nvidia. Now it's all one and the same.

As others have said, this will do wonders for RISC-V.

RobLach 1326 days ago [-]
ARM will not be cheaper after this.
stupendousyappi 1326 days ago [-]
Conversely, Nvidia has done a solid job of advancing GPU performance even in the face of weak competition, and with their additional resources, ARM performance may advance even faster, and provide the first competition in decades to x86 in servers and desktops.
teruakohatu 1326 days ago [-]
The tech may have advanced due to the insatiable hunger of machine learning, but the weak competition has meant pricing has not decreased as much as it should or could have, only enough to move more GPUs. (Nvidia's biggest competitor is the GPUs they manufactured two years earlier.)
dannyw 1326 days ago [-]
Really? You get 2x perf per dollar with Ampere. That’s not good enough on the pricing front?
teruakohatu 1326 days ago [-]
Yes really. Performance that is cleverly hampered by RAM (and driver licensing) on the low end from an ML perspective. The only reason they can do this is because of lack of competition. The performance of 3000 series cards could be dramatically improved for large models at a modest increase in price if RAM was doubled.

It really is possible to be critical of a monopoly without disparaging the product itself. It is when a true competitor arises that we see the monopolist's true capabilities (see Intel and AMD).

dannyw 1323 days ago [-]
The GeForce series is targeted at gamers. Video games are not making use of more than ~8GB of VRAM.

There is literally no benefit to 90% of the audience if they doubled the RAM. Of course, they also want to do market segmentation too, but you can’t blame GeForce for not being designed for ML training.

867-5309 1326 days ago [-]
would that affect the value of Raspberry Pi and other budget devices?
wetpaws 1326 days ago [-]
More power concentrated in a company that already has a de facto monopoly over the GPU market.
fizzled 1326 days ago [-]
Ask Silicon Labs, NXP, STMicroelectronics, Dialog Semiconductor, ADI, Infineon, ON Semi, Renesas, TI, ... they all license Arm IP.
gruez 1326 days ago [-]
Bad for ARM, good for every other ISA.
bitxbit 1326 days ago [-]
They need to block this deal. A real clean case.
iso8859-1 1326 days ago [-]
Who needs to block it? And why is it a clean case?
sizzle 1326 days ago [-]
Holy shit this is huge. Did anyone see this coming?!?
sshlocalhost 1326 days ago [-]
I am really worried about a single company monopolising the entire market.
patfla 1326 days ago [-]
How does this get past the FTC? Oh right, it's been a dead letter since the Reagan administration. Monopoly 'R Us.

Never mind the FTC - the rest of the semiconductor industry has to be [very] strongly opposed.

hitpointdrew 1325 days ago [-]
LOL, Apple must be shitting bricks. Serves them right for going with ARM for their new MacBooks; the smarter move would have been an AMD Ryzen APU, and they also clearly should have gone with AMD Epyc for the new Mac Pros.
yissp 1326 days ago [-]
I still think the real reason was just to spite Apple :)
RL_Quine 1326 days ago [-]
The company who was part of the creation of ARM and has a perpetual license to its IP? Tell me how.
hu3 1326 days ago [-]
Even perpetual license to future IP?
scarface74 1326 days ago [-]
Why would Apple need future ARM IP? They have more than enough in house talent to take their license in any direction they wish.
hu3 1326 days ago [-]
Such a deal could certainly benefit both players. Hence my question.
ericmay 1326 days ago [-]
How does this spite Apple?
hetspookjee 1326 days ago [-]
I wonder what this means for Apple and their move to ARM for their MacBooks. At the end of 2019, Apple and NVIDIA ended their cooperation on CUDA. Both these companies are very tight on their hardware. Apple must've known this was happening, but I guess they weren't willing to pay more than $40B for this risky joint venture they're bound to go into.

Anyone has a proper analysis on the ramifications of this acquisition for Apple's future in ARM?

renewiltord 1326 days ago [-]
Apple is an ARM founder. You can bet your boots they made sure they were safe through the history of ARM's existence and sale to Softbank in the first place. No one can cite the deep magic to them, they were there when it was written.
shmerl 1326 days ago [-]
That's nasty, Nvidia, very nasty. But on the other hand, it might be a motivation for everyone to use ARM less.

I quite expect AMD, for example, to drop ARM chips from their hardware. Others should follow suit. Nvidia is an awful steward for ARM.
