The Snapdragon 855's iGPU (chipsandcheese.com)
darksaints 2 days ago [-]
I've been wanting to do an Ask HN for this, but seeing as the Snapdragon 855 is directly related to my question, I'm hoping someone here has some insights that could point me in the right direction. I'm currently developing a specialized UAV, and I've gotten to the point where I have MCU-based flight controller hardware almost running, based on open-source Pixhawk designs. But the long-term plan is to introduce a variety of tasks that rely on computer vision and/or machine learning, so the general approach is to add a companion computer. The default choice for this at the prototype stage seems to be a Raspberry Pi or Jetson module, but I would really like to develop the entire system as a single module with an eye toward serial production, which means I'm looking at embedding an SoC as opposed to wiring up an external module. That has led me to look at the Snapdragon SoCs, in large part due to the GPU capabilities. But now I'm in unfamiliar territory, because I can't do ordering and research via Digikey like I have with everything else up until now.

When we're talking about new development, what are the expectations that I should have for direct procurement of higher end components (this goes for Sony/Samsung camera sensors too)? Are there typically very large minimum purchase quantities? Do I have to already have a large production output before I can even talk to them about it? Is it possible to get datasheets even if I'm not at that stage yet?

I get the feeling I'm stuck with the Jetson + Off-the-shelf camera approach until I can demonstrate the ability to mass produce and sell my existing designs, but it would be nice if I could find out more about how that is supposed to play out.

adrian_b 1 days ago [-]
An alternative to Jetson are the SBCs or SOMs based on MediaTek Genio 1200 (MTK MT8395).

The CPU included in this is based on the same Cortex-A78 cores as the latest Jetson Orin, but it should have much better performance per dollar, because the cheaper Jetson variants are intentionally crippled, with many disabled cores and very low maximum clock frequency.

The GPU is weaker than on Jetson Orin, but it should be enough for many applications. There are also a couple of AI accelerators and dual simultaneous video camera capture at high resolution.

In any case, Genio 1200 is much faster than any Pi and also than RK3588. It has a similar performance to a flagship smartphone of 2021.

In the beginning, the SOMs and SBCs with Genio 1200 were expensive, in the $400 to $500 range, so while they were a better deal than Jetsons they still had worse performance per dollar than a SOM or SBC made with an Intel N100 CPU (or its Atom equivalent).

However, the Radxa NIO 12L was announced not long ago; it uses the Genio 1200 in an SBC of smartphone size at a price of not much over $100, similar to the older and slower RK3588.

Therefore it appears that it has become possible to obtain Genio 1200 at a decent price.

Perhaps if you contact MediaTek sales they might be more willing to provide design information than Qualcomm and they certainly have better prices.

Moreover, the Radxa NIO 12L schematics and PCB assembly map are published on their site (a practice that was standard many decades ago, including at the leading US companies, but which now sadly survives mostly at Chinese companies). You can download and study them to evaluate the effort of designing your custom board and to avoid pitfalls in your own design.

darksaints 20 hours ago [-]
This is really amazing information, I appreciate it so much!
echoangle 2 days ago [-]
I would stick to mounting SoMs on your PCB until you have a clear need for something else; integrating a SoC onto a PCB isn't trivial, and you need to take a lot of care with signal paths and RF properties.
user_7832 2 days ago [-]
Disclaimer, I'm not an expert in this field.

If you're hoping to commercialize and are a 1-man company, I hope you have a good USP/feature that's beyond "it's cheaper than the competition". If you do have that, I'd say just get an MVP out while keeping systems as platform-agnostic as you can. It doesn't matter if you need an extra Jetson - if your product is good enough, the extra cost shouldn't really deter customers.

Once you've started, hire/contract folks and streamline it out.

BTW, dev kits do exist for the Snapdragon 8 Gen 2/3 I think, but they're about $700 last I checked. If you're handy you could even try to reuse a rooted phone, if that's feasible for you.

netbioserror 2 days ago [-]
iGPUs have climbed high enough that they are overpowered for "normal" tasks like video watching and playing browser games. They can even do 1080p 60 FPS gaming to a decently high standard. And we've already proven via ARM that RISC architectures are more than ready for everything from embedded to high-power compute. What happens when iGPUs get good enough for 1440p 120 FPS high-end gaming? Game visuals have plateaued on the rasterization front. Once iGPUs are good enough, nobody will have much reason to get anything other than a tiny mini PC. The last frontier for GPUs will be raw supercomputing. Basically all PCs from there on out will just be mini machines.

The next few years will be very interesting.

hbn 2 days ago [-]
Don't worry, no matter how good hardware gets, you can rest assured bad software will push it to the limits and further push the minimum system requirements to run anything as basic as a note taking app or music player.

https://en.wikipedia.org/wiki/Andy_and_Bill%27s_law

banish-m4 2 days ago [-]
Sigh. The problem with fast hardware is that it enables ever worse software development techniques. Perhaps we need 80286-level performance again, and then unlimited performance for 6 months once every decade.
fidotron 2 days ago [-]
For the near future, actually using this power does cause a notable battery hit. We reached the point a while back with mobile 3D where you actively have to hold yourself back to stop draining the battery, since so many devices will keep up frame-rate-wise to a surprising degree.

Mobile phones have taught us another annoying lesson though: edge compute will forever be under-utilized. I would go so far as to say that a huge proportion of the mainstream audience is simply not impressed by any advances in this area; the noise is almost entirely from developers.

callalex 2 days ago [-]
Increased edge compute has brought a lot of consumer features if you take a step back and think about it. On-device, responsive speech recognition that doesn’t suck is now possible on watches and phones without a server round trip, and a ton of image processing such as subject classification and background detection is used heavily. Neither of those applications were even possible 5-7 years ago and now even the cheap stuff can do it.
jsheard 2 days ago [-]
Memory bandwidth is the major bottleneck holding iGPUs back; the conventional approach to CPU RAM is nowhere near fast enough to keep a big GPU fed. I think the only way to break through that barrier is to do what Apple is doing - move all of the memory on-package and, unfortunately, give up the ability to upgrade it yourself.
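For rough intuition, here is a back-of-envelope sketch of peak bandwidth as bus width times transfer rate; the configurations and rates are illustrative approximations, not vendor specifications:

  # Peak memory bandwidth ~= (bus width in bits / 8) * transfer rate in MT/s.
  # All configurations and rates below are illustrative approximations.
  def peak_gb_s(bus_bits: int, mega_transfers: float) -> float:
      """Theoretical peak bandwidth in GB/s."""
      return bus_bits / 8 * mega_transfers * 1e6 / 1e9

  configs = {
      "dual-channel DDR5-5600 (128-bit)": (128, 5600),
      "on-package LPDDR5-6400, 512-bit (Apple Max-class)": (512, 6400),
      "discrete GPU, 384-bit GDDR6X @ 21000 MT/s": (384, 21000),
  }

  for name, (bits, mtps) in configs.items():
      print(f"{name}: ~{peak_gb_s(bits, mtps):.0f} GB/s")

That works out to roughly 90 GB/s vs ~410 GB/s vs ~1000 GB/s, which is the gap being described: a conventional dual-channel setup simply cannot keep a large GPU fed.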
15155 2 days ago [-]
I would rather be in a world where we have skilled hot-air rework shops in major cities (like Shenzhen has) that can cheaply do this upgrade.

The speed of light isn't changing, nor is EMI. Upgradability is directly at odds with high-speed, RF-like, timing-sensitive protocols. "Planned obsolescence!!!" - no: electrical reality.

bluGill 2 days ago [-]
The upgrade would be to replace the iGPU, including the memory on its package.

Upgrades are not really environmentally good. They might be the best compromise, but the parts you take off are still discarded.

throwaway11460 2 days ago [-]
But it's still better than discarding the whole thing, isn't it?
paulmd 2 days ago [-]
Why discard anything at all? If the laptop is sufficiently durable it should be able to endure multiple owners over its functional lifespan.
talldayo 2 days ago [-]
Well one day the SSD will fail, and then your options are as follows:

1) reflow the board with a steady hand

2) bin it

The overwhelming majority of customers will not choose option 1.

throwuxiytayq 2 days ago [-]
It often takes a very slight upgrade to give the hardware a whole new life. Plenty of macbooks with 8GB of RAM out there.
throwaway11460 1 days ago [-]
People don't want to deal with the hassle of selling it. Most often they just discard it. I myself have many older phones in the drawer - theoretically fully capable, but finding a buyer, meeting with them, proving to them that it works, etc. is just not something I can do right now.
15155 2 days ago [-]
Why are they discarded?
doctorpangloss 2 days ago [-]
I am pretty sure cooling is holding back performance on iPhones and other phones though. They throttle after about 5 minutes of typical high-end 3D gameplay. That's how an iPhone can be much faster than a Switch, including using faster RAM, yet not really be useful as a game console.
resource_waste 2 days ago [-]
The problem is there is still a limit. There is a reason we have GPUs.

I genuinely can't figure out if Apple has a point, or is doing something quirky so they can overcharge outsiders.

Given that no other companies are really investing resources here and Nvidia is the giant, I'd guess it's the latter. I see too many Reddit posts from disappointed CPU-LLM users who wanted to run large models.

faeriechangling 2 days ago [-]
Apple using a 512-bit memory bus on its Max processors is indeed the future of APUs if you ask me.

AMD is coming out with Strix Halo soon with a 256-bit memory bus, and on the high end we've also seen the niche Ampere platform running Arm CPUs with 576-bit buses. The PS5 uses a 256-bit bus and the Series X a 320-bit bus, but they use GDDR instead of DDR, which increases cost and latency to optimise for bandwidth; there's no reason you couldn't design a laptop or Steam Deck that did the same thing. AMD has the MI300X, which uses 192 GB of HBM3 over an 8192-bit bus.

I don't think it's just apple going this way, and I do think that more and more of the market is going to be using this unified approach instead of the approach of having a processor and coprocessor separated over a tiny PCIe bus with separate DDR/GDDR memory pools. With portable devices especially, more and more every year I struggle to see how the architecture is justified when I see the battery life benefits of ridding yourself of all this extra hardware. LLM inferencing also creates a nice incentive to go APU because the dual processor architecture tends to result in excessive bandwidth and anemic capacity.

Nvidia may be the giant, yet if you look at what most gamers actually run on, it's Apple APUs, Snapdragon APUs, and AMD APUs. PCs with a discrete CPU and a separate Nvidia GPU have become a relatively smaller slice of the market, even though that segment has grown and Nvidia has a stranglehold on it. The average game is not developed to run on a monster machine with a separate CPU and GPU - it's designed to run on a phone APU - and something like a Max processor is much more powerful than required to run the average game coming out.

kllrnohj 2 days ago [-]
> Apple using a 512-bit memory bus on its Max processors is indeed the future of APUs if you ask me.

It's very expensive to have a bus that wide, which is why it's so rarely done. Desktop GPUs have done it in the past ( https://www.techpowerup.com/gpu-specs/?buswidth=512%20bit&so... ), but they all keep pulling back from it because it's too expensive.

Apple can do it because they can just pay for it and know they can charge for it, they aren't really competing with anyone. But the M3 Max is also a stonking huge chip - at 92bn transistors it's significantly bigger than the RTX 4090 (76bn transistors). Was a 512-bit bus really a good use of those transistors? Probably not. Will others do it? Probably also no, they need to be more efficient on silicon usage. Especially as node shrinks provide less & less benefit yet cost increasingly more.

faeriechangling 2 days ago [-]
512-bit is probably a bit extreme, but I can see 192-bit and 256-bit becoming more popular. At the end of the day, if you have a high-end APU, having a 128-bit bus is probably THE bottleneck to performance. It's not clear to me that it makes sense or costs any less to have two 128-bit buses on two different chips which you see on a lot of gaming laptops instead of a single 256-bit bus on one chip for the midrange market.

The M3 Pro only used a mere 37 billion transistors with a 192-bit bus, so you can go wider than 128-bit while being economical about it. I'd love for there to be a 512-bit Strix Halo, but it probably won't happen; it probably doesn't make business sense.

I don't know if the comparison to GPUs necessarily tracks here because the price floor of having 8 chips of GDDR is a lot higher than having 8 chips of DDR.

kllrnohj 24 hours ago [-]
Sure, which is why you see other midrange APUs use >128-bit buses, eg PS4 & PS5 are both 256-bit buses w/ GDDR.

Similarly, ultra high end APUs like the Xeon Max are doing on-package HBM for truly monstrous memory bandwidth numbers.

The thing is that nobody else is really trying to make anything like an M3 Pro, and I don't know if they even will. It's a really weird product. Big dies are expensive, hence everyone pushing towards chiplets. A really great spot to split up dies is between compute units that are largely independent - which the CPU & GPU actually are. There's a few workloads where unified memory helps, but most don't. So splitting those apart makes a ton of sense still. Then also if you push them hard to squeeze out all the performance you can, they both get very hot - so you want them physically far apart for cooling reasons. At which point you might as well just give them their own memory which can also be further specialized for their respective needs as one is latency biased and the other bandwidth biased. And now you're just back to traditional desktop architecture - it still makes just way too much sense for the high end.

It makes sense for Apple since they focus almost exclusively on laptops and you get exactly the single mix of CPU & GPU they decide, just like it does for consoles where again they have a midrange power budget & a single CPU/GPU configuration over millions of units. But as soon as different product specializations show up and different workload demands are catered to, coupling the CPU & GPU like that kinda doesn't make sense?

nsteel 2 days ago [-]
But isn't that GDDR option something like 3-4x the bandwidth? So you'd fit far fewer chips to hit the same bandwidth.
resource_waste 2 days ago [-]
>Nvidia may be the giant yet if you look at what most gamers actually run on, they run on Apple APUs, Snapdragon APUs, and AMD APUs.

Wat

This is like extremely incorrect.

It's Nvidia.

Yikes, your technobabble at the start seemed like smart-people talk, but the meat and potatoes is off base.

ProfessorLayton 2 days ago [-]
Is it incorrect though? Mobile alone is 85B of the 165B market [1], not to mention that Nintendo's Switch is basically an Android tablet with a mobile chipset.

[1] https://helplama.com/wp-content/uploads/2023/02/history-of-g...

resource_waste 2 days ago [-]
Yes, we aren't using mobile gaming as an indicator of GPU growth/performance.

This seems like some way to be like 'well, tecknicaklly', to justify some absurd argument that doesn't matter to anyone.

ProfessorLayton 2 days ago [-]
>Yes, we arent using mobile gaming as an indicator of GPU growth/performance.

Who's "we" because big tech has absolutely been touting GPU gains in their products for a long time now [1], driven by gaming. Top of the line iPhones can do raytracing now, and are getting AAA ports like Resident Evil.

In what world is being over half of a 185B industry a technicality?. A lot of these advancements on mobile end up trickling up to their laptop/desktop counterparts (See Apple's M-series), which matters to non-mobile gamers as well. Advancements that wouldn't have happened if the money wasn't there.

[1] https://crucialtangent.files.wordpress.com/2013/10/iphone5s-...

sudosysgen 2 days ago [-]
To be fair, an Nvidia mobile chipset.

Also, much of mobile gaming isn't exactly pushing the envelope of graphics, especially by revenue.

talldayo 2 days ago [-]
> Nvidia may be the giant yet if you look at what most gamers actually run on, they run on Apple APUs, Snapdragon APUs, and AMD APUs.

They really don't? You're trying quite hard to conflate casual gaming with console and PC markets, but they obviously have very little overlap. Games that release for Nvidia and AMD systems almost never turn around and port themselves to Apple or Snapdragon platforms. I'd imagine the people calling themselves gamers aren't referring to their Candy Crush streak on iPhone.

> something like a Max processor is much more powerful than required to run the average game coming out.

...well, yeah. And then the laptop 4080 is some 2 times faster than that: https://browser.geekbench.com/opencl-benchmarks

faeriechangling 2 days ago [-]
I mean, I see tons of shooters (PUBG, Fortnite, CoD, etc.) and RPGs (Hoyoverse titles) getting both mobile and PC releases, and they're running on the same engine under the hood. I'm even seeing a few indie games go cross-platform. Of course some games simply don't work across platforms due to input or screen-size limitations, but Unity/Unreal are more than three quarters of the market and can enable a release on every platform, so why not do a cross-platform release if it's viable?

I just see the distinction you're drawing as arbitrary and old-fashioned; it misses the huge rise of midcore gaming, which is seeing tons of mobile/console/PC releases. I understand that a TRUE gamer would not be caught dead playing such games, but as more and more people buy APU-based laptops to play their Hoyoverse games, that's going to warp the market and cause the oppressed minority of TRUE gamers to buy the same products due to economies of scale.

talldayo 2 days ago [-]
I don't even think it's a "true gamer" thing either. Besides Tomb Raider and Resident Evil 8, there are pretty much no examples of modern AAA titles getting ported to Mac and iPhone.

The console/PC release cycle is just different. Some stuff is cross-platform (particularly when Apple goes out of their way to negotiate with the publisher), but most stuff is not. It's not even a Steam Deck situation where Apple is working to support games regardless; they simply don't care. Additionally, the majority of these cross-platform releases aren't quality experiences but gacha games, gambling apps, and subscription services. You're not wrong to perceive mobile gaming as a high-value market, but it's on a completely different level from other platforms regardless. If you watch a console/PC gaming showcase nowadays, you'd be lucky to find even a single game that is supported on iOS and Android.

> so why not do a cross-platform release if it's viable?

Some companies do; Valve famously went through a lot of work porting their games to macOS, before Apple deprecated the graphics API they used and cut off 32-bit library support. By the looks of it, Valve and many others just shrug and ignore Apple's update treadmill altogether. There's no shortage of iOS games I played on my first-gen iPod that are flat-out broken on today's hardware. Meanwhile the games I bought on Steam in 2011 still run just fine today.

faeriechangling 2 days ago [-]
I just don't get this obsession with the idea that only recently-released "AAA" games are real games (or that the only TRUE gamers are those who play them) and it seems like the market and general population doesn't quite grasp it either. These FAKE gamers buy laptops too, and they probably won't see the value in a discrete GPU.

Besides, it's ultimately irrelevant, because when Strix Halo comes out it's going to have the memory bandwidth and compute performance to play any "AAA" game released for consoles until the consoles refresh around ~2028 - four solid years of performance before new releases really make it struggle. These APUs won't be competing with the 4080 but with the 4060, which is the more popular product anyway. Discrete GPUs are in an awkward spot where they're not going to be significantly more future-proof than an APU you can buy, but will draw more power and will likely have a higher BOM to manufacture.

If you had asked TRUE gamers a few years ago whether gaming laptops with Nvidia GPUs were worth it, back when they were already the majority of the market, they would have laughed in your face, pointed out how poorly those laptops ran the latest AAA games, and told you that TRUE gamers won't buy them and that you should buy a cheap laptop paired with a big desktop instead.

talldayo 2 days ago [-]
> I just don't get this obsession with the idea that only recently-released "AAA" games are real games

It's really the opposite; I think obsessing over casual markets is a mistake, since casual gaming customers are incidental. These are people playing the lowest-common-denominator, ad-ridden, microtransaction-laden apps that fill the App Store, not Halo 2 or Space Cadet Pinball. It really doesn't matter when the games came out, because the market is always separated by more than just third-party ambivalence. Apple loves this unconscious traffic, because these users will buy any garbage put in front of them. Let them gorge on Honkai Star Rail while Apple counts its 30% cut of their spending on digital vice.

Again, I think it's less of a distinction between "true" and "casual" gamers, but more what their OEM encourages them to play. When you owned feature phones, it was shitty Java applets. Now that you own an iPhone... it's basically the same thing with a shinier UI and larger buttons to enter your credit card details.

I'll just say it; Apple's runtime has to play catch-up with stuff like the Steam Deck and even modern game consoles. The current piecemeal porting attempts are pathetic compared to businesses a fraction their size. Even Nvidia got more people to port to the Shield TV, and that was a failure from the start.

hamilyon2 2 days ago [-]
Is the best-selling game on every platform, Minecraft, casual or hardcore? What about Hearthstone, which lets you play as much as you like, even just once a year (and still win) - casual, competitive, and addictive: choose three.

It is as if the casual/TRUE-gamer dichotomy is a false one.

Dalewyn 2 days ago [-]
>The average game developed is not developed to run on a separate CPU and GPU monster beast - they're designed to run on a phone APU

I love how performant mobile games are on desktop/laptop hardware assuming good ports, Star Rail and Princess Connect! Re:Dive for some examples.

This will probably go away once mobile hardware gets so powerful there's no requirement for devs to be efficient with their resource usage, as has happened with desktop/laptop software, but god damnit I'll enjoy it while it lasts.

jsheard 2 days ago [-]
> I genuinely can't figure out if Apple has a point, or is doing something quirky so they can overcharge outsiders.

Well, Apple was already soldering the memory onto the motherboard in most of their Intel machines, so moving the memory on-package didn't really change their ability to gouge for memory upgrades much. They were already doing that.

ryukafalz 2 days ago [-]
If so I would like to see more of what the MNT Reform[0] is doing. You can have your CPU, GPU, and memory all on a single board while still making that board a module in a reusable chassis. There's no reason the rest of the device needs to be scrapped when you upgrade.

[0] https://mntre.com/modularity.html

gabrielhidasy 1 days ago [-]
Move some (but not all) memory on package? NUMA is not easy but it's an option.
papruapap 2 days ago [-]
I don't think so. AMD is releasing a 256-bit-wide part soon; we will see how it performs.
kanwisher 2 days ago [-]
We keep making more intensive games and applications that tax the GPU more. LLMs have already kicked off another 5-year cycle of upgrades before they run at full speed on consumer hardware.
citizenpaul 2 days ago [-]
Isn't one of the issues with local LLMs the huge amount of GPU memory needed? I'd say we'll go a lot longer than 5 years before phones have 24+ GB of VRAM.
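For a sense of scale, here is a back-of-envelope sketch of weight memory as parameter count times bytes per parameter; it ignores KV cache and runtime overhead, and the model sizes are just illustrative examples:

  # Weight memory ~= parameter count * bytes per parameter.
  # Ignores KV cache, activations and runtime overhead; sizes are examples only.
  def weights_gb(params_billion: float, bits_per_param: int) -> float:
      return params_billion * 1e9 * (bits_per_param / 8) / 1e9

  for name, params in [("7B", 7), ("13B", 13), ("70B", 70)]:
      sizes = ", ".join(f"{bits}-bit: ~{weights_gb(params, bits):.1f} GB"
                        for bits in (16, 8, 4))
      print(f"{name} model -> {sizes}")

A 7B model quantized to 4 bits fits in roughly 3.5 GB, while a 70B model at 16 bits needs around 140 GB, which is why anything beyond small models wants the 24+ GB class of memory.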
jerlam 2 days ago [-]
It's very questionable whether consumers want to spend the money to run LLMs on local hardware - and companies would rather charge a subscription fee than let them.

High-end PC gaming is still a small niche. Every new and more powerful gaming console sells worse than the previous generation. VR hasn't taken off to a significant degree.

testfrequency 2 days ago [-]
You know, I agreed with this years ago and spent $4,000 on an SFFPC based on an Intel NUC… only to have Intel drop the NUC entirely. While the NUC has integrated graphics, the major benefit was pairing it with a full-sized GPU.

Now I’m stuck with an old CPU, and have little reason to upgrade the GPU.

I guess my main hope is that upgrading small-form-factor machines becomes modular and seamless in the future, so I can keep a standard case for generations.

netbioserror 2 days ago [-]
It's my hope as well that maybe a new upgradeable standard for mini-PCs comes about. But keep in mind, Linux doesn't care much if the hardware configuration changes. One could conceivably just move their SSD and other drives to a new mini PC and have a brand new system running their old OS setup. If only Microsoft were so accommodating.

The added benefit being how useful a mini PC can be over an old tower. Just buy a cheap SSD and: give it to grandma; turn it into a NAS; use it as a home automation hub; use it as a TV media center; any number of other possibilities that a small form factor would allow. All things that were possible with an old tower PC, but far less convenient.

And now that we're on a pretty stable software performance plateau (when we don't throw in browsers), newer chips can conceivably last much longer, decades, even.
Dalewyn 2 days ago [-]
>If only Microsoft were so accommodating.

You can use sysprep[1], though nowadays you don't even need to do that most of the time.

[1]: https://learn.microsoft.com/en-us/windows-hardware/manufactu...

Salgat 2 days ago [-]
8K-16K is the breakpoint where pixel density is high enough to no longer need anti-aliasing, so we have a ways to go; and once we hit that, we still have a long way to go in rendering quality. Screenshots can look really nice, but dynamic visuals are still way behind the realistic graphics you'd expect to see in movies.
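One way to put rough numbers on that breakpoint is angular pixel density (pixels per degree). This is a minimal sketch; the 27-inch panel, 60 cm viewing distance, and the common ~60 pixels-per-degree "retina" rule of thumb are my assumptions for illustration, not figures from the comment:

  import math

  # Pixels per degree (PPD) at a given viewing distance: a rough proxy for when
  # individual pixels, and hard edge aliasing, stop being easy to resolve.
  def pixels_per_degree(h_pixels: int, screen_width_cm: float, distance_cm: float) -> float:
      fov_deg = math.degrees(2 * math.atan(screen_width_cm / (2 * distance_cm)))
      return h_pixels / fov_deg

  WIDTH_CM = 59.8  # ~27" 16:9 panel width (assumed)
  DIST_CM = 60.0   # typical desktop viewing distance (assumed)

  for label, px in [("1440p", 2560), ("4K", 3840), ("8K", 7680), ("16K", 15360)]:
      print(f"{label}: ~{pixels_per_degree(px, WIDTH_CM, DIST_CM):.0f} px/deg")

At that setup 1440p comes out around 48 px/deg, 4K around 72, and 8K around 145; since high-contrast aliasing remains visible well past the ~60 px/deg point, 8K-16K as the no-AA breakpoint is plausible.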
MrBuddyCasino 2 days ago [-]
> What happens when iGPUs get good enough for 1440p 120 FPS high-end gaming?

They won't with the mainstream number of memory channels, where iGPUs are severely bandwidth starved. AMD Strix Halo wants to change that, but then the cost difference to a discrete GPU gets smaller.

They might cannibalise even more of the low-end market, but I'm not sure they will make mid-range (or high-end) dGPUs obsolete.

refriedbeans 2 days ago [-]
Even today, the 4090 with path tracing cannot achieve 1440p at 120 fps in Cyberpunk, and the 1% lows are not great. Also, high-resolution VR will require much more performance (unless we manage to get foveated rendering working effectively).
netbioserror 22 hours ago [-]
Maybe high-end is the wrong word. I mean the games people play commonly and regularly. Most of those can be played at 1440p ~90fps as far back as Turing or the Radeon 5000, I think. Most people have at least that at this point.
ethanholt1 2 days ago [-]
I was able to get Counter-Strike 2 running on the iGPU of a fourth-gen Radeon chip. Don't remember the name, but I got barely 10 FPS and it was unplayable, but it ran. And it was amazing.
msh 2 days ago [-]
CS2 runs fine on a Steam Deck with its iGPU
beAbU 2 days ago [-]
Cloud gaming + a monthly subscription is probably going to be the end result. I won't be surprised if the next Xbox generation is a streaming stick + wireless controller.

The writing is on the wall for me. A cheap streaming device + a modest games subscription will open the platform up to many who are currently held back by the cost of the console.

hollerith 2 days ago [-]
Hasn't that been tried by well-funded companies like Google starting over 10 years ago and failed?
0x457 2 days ago [-]
Stadia was doomed to fail:

- You have to buy all the games all over again

- People familiar with Google's ways didn't want to invest anything into Stadia

- The Stadia controller, while nice, didn't work with anything but Stadia until recently (it still doesn't work with ATV for some reason)

- Google's own devices didn't support Stadia, for absolutely no reason (you could side-load the app, and it worked just fine)

xCloud always felt much better than Stadia when playing.

gabrielhidasy 1 days ago [-]
It didn't really fail - GeForce Now is still available and working well. Stadia was abandoned by Google but worked really well for those who had good internet and lived close enough to the data centers.

The bad part is needing to live close to a big, expensive, GPU-heavy data center, but if mobile GPUs are getting that good, maybe in the near future we could have single servers or small racks serving a few dozen to a few hundred players. That would be much easier to co-locate and grow as needed, and it would improve latency.

masterj 2 days ago [-]
Yes, but sometimes ideas come ahead of the infrastructure that would enable them, in this case fiber rollouts and geographically distributed data centers with GPUs. I've been really impressed by GeForce Now lately and XBox Cloud Gaming is a thing.
beAbU 2 days ago [-]
Yes and? Does that mean it can never be tried again?

If car makers had given up on EVs 30 years ago and never tried improving them, where would we be today? Because EVs are not the ICE replacement today, does that mean they won't be in 30 years' time?

All I'm saying is that the natural outcome for gaming is cloud + subscription. Maybe not the next Xbox console, but possibly the one after next.

This was the inevitable outcome of music and video. Gaming is next.

We are going to own nothing and we will love it.

byteknight 2 days ago [-]
To the detriment of the experience.
nothercastle 2 days ago [-]
GeForce Now works great when it works, but they are under-provisioned on capacity, so there are times when their data centers perform really poorly.

The biggest issue is probably long load times. A suspend-state-to-drive function would be a big boost.

Then it's probably the oversold capacity.

Then it would be that only about 80% of titles are compatible.

hyperthesis 2 days ago [-]
There's still real-time ray-tracing.

Cosmetic physics can eat up more, but as a gameplay mechanic, user interaction with realistic physics is too unpredictable (beyond a gimmick). Of course, we could then get actual computer sports.

saidinesh5 2 days ago [-]
What makes it even more interesting is that more and more people are starting to enjoy AA games over AAA games, for various reasons (cost, more experimental/novel gameplay mechanics and stories).

It feels like the next few years would be really glorious for handheld gaming...

nothercastle 2 days ago [-]
Basically every AAA game has been bad recently. The only interesting games are in the AA and indie categories.
Dalewyn 2 days ago [-]
>What happens when iGPUs get good enough for 1440p 120 FPS high-end gaming?

The video card goes down the same road as the sound card.

Given how expensive video cards have gotten thanks first to cryptocurrency mining and now "AI", it's only a matter of time.

maxglute 2 days ago [-]
More mobile desktop mode plz.
wiseowise 2 days ago [-]
> What happens when iGPUs get good enough for 1440p 120 FPS high-end gaming?

Hell yeah, we celebrate.

taskforcegemini 2 days ago [-]
but this is still far from good enough for VR
stefan_ 2 days ago [-]
The same as always: we crank up the resolution? Sorry to say, but your 1080p and 1440p are already a few years behind, so "the next few years" aren't terribly exciting.
kllrnohj 2 days ago [-]
> Qualcomm’s OpenCL runtime would unpredictably crash if a kernel ran for too long. Crash probability goes down if kernel runtimes stay well below a second. That’s why some of the graphs above are noisy. I wonder if Qualcomm’s Adreno 6xx command processor changes had something to do with it. They added a low priority compute queue, but I’m guessing OpenCL stuff doesn’t run as “low priority” because the screen will freeze if a kernel does manage to run for a while.

Very few (if any) mobile class GPUs actually support true preemption. Rather they are more like pseudo-cooperative, with suspend checks in between work units on the GPU. Desktop GPUs only got instruction-level preemption not that long ago - Nvidia first added it with Pascal (GTX 10xx), so mobile still lacking this isn't surprising. It's a big cost to pay for a relatively niche problem.

So the "crash" was probably a watchdog firing for failing to make forward progress at a sufficient rate and also why the screen would freeze. The smallest work unit was "too big" and so it never would yield to other tasks.

kimixa 2 days ago [-]
> Desktop GPUs only got instruction-level preemption not that long ago

And even then it's often limited to already-scheduled shaders in the queue - things like the register file being statically allocated at task-schedule time mean you can't just "add" a task, and removing a task is expensive: you need to suspend everything, store off the (often pretty large) register state and any shader-local data in use (or similar), stop that task, and deallocate the shared resources. It's avoided for good reason, and even where it's supported it's likely a rather untested, buggy path.

If you run an infinite loop on even the latest Nvidia GPU (with enough instances to saturate the hardware) you can still get "hangs", as it ends up blocking things like composition until the driver kills the task. It's still nowhere near the experience CPU task preemption gives you.

jimmySixDOF 2 days ago [-]
Nice. The title should probably include the year (2019?), but it's interesting to me because even the latest 2024 XR2+ Gen 2 is based on the Snapdragon 865 with almost the same iGPU, which may mean there is a technical benefit (vs. just extra cost) to not using the newer 888s.
TehCorwiz 2 days ago [-]
I'm not disputing your suggested date, but I can't find it anywhere in the article. Can you explain where you got it?
adrian_b 2 days ago [-]
The article is new, only the tested CPU+GPU is from 2019.

Snapdragon 855 was indeed the chipset used in most flagship Android smartphones of 2019. I have one in my ASUS ZenFone.

Based on the big ARM cores that are included, it is easy to determine the manufacturing year of a flagship smartphone. The cheaper smartphones can continue to use older cores. Arm announces a core one year before it becomes used in smartphones.

  Cortex-A72 => 2016
  Cortex-A73 => 2017
  Cortex-A75 => 2018
  Cortex-A76 => 2019 (like in the tested Snapdragon 855)
  Cortex-A77 => 2020
  Cortex-X1 => 2021
  Cortex-X2 => 2022
  Cortex-X3 => 2023
  Cortex-X4 => 2024
TehCorwiz 2 days ago [-]
Ok. Cool, but the title makes it clear which one it's talking about. Usually when a date is added to the title it's in order to clarify when the article was published, not necessarily when the object of the article began to exist.
jamesy0ung 2 days ago [-]
I didn't realise the cores in the Pi 5 were 4 years old already. Surely if some vendor like Qualcomm or MediaTek released an SBC with decent software and recent cores, they could sweep the floor.
nsteel 2 days ago [-]
They are 16nm, so yeh, "old". Newer tech means a newer node and a considerably more expensive part. At which point, why are you buying an SBC over some small Intel/AMD thing?
helf 2 days ago [-]
The year is... He posted it yesterday lol.
crest 2 days ago [-]
Did you read the date of publication of the article benchmarks or did you just blindly assume the author published them on the day the embargo on publishing benchmarks for the chip fell?