carbonatedmilk 131 days ago [-]
Pull quote:

We run compute on the cloud and have no real-time requirements. I was asked by the Chair, “How much complexity increase is acceptable?”

I was not prepared for the question, so did some quick math in my mind estimating an upper bound and said “At the worst case 100X.”

The room of about a hundred video standardization experts burst out laughing. I looked at the Chair perplexed, and he says,

“Don’t worry they are happy that they can try-out new things. People typically say 3X.” We were all immersed in the video codec space and yet my views surprised them and vice versa.

carbonatedmilk 131 days ago [-]
This makes complete sense to me: when you're going to stream an episode of Stranger Things to 15 million people, who really cares what the one-time cost to encode is? Surely you'd take algorithmic complexity that was 100x or even 1000x the baseline if it provided even a few kB of bandwidth savings per stream?
lostcolony 131 days ago [-]
You still care, since at some point the cost of the extra encoder-hours used is going to outpace the bandwidth savings. I'm guessing she mentally assumed 'at no bandwidth savings, how much margin do we have?', since the amount saved wasn't ever quantified.
derf_ 131 days ago [-]
There are a couple of things that make Netflix somewhat unique.

1) They have a fairly limited catalog (contrast with the constant ingestion of new content by Youtube, for example)

2) The cheapest way for them to ensure they have sufficient capacity to handle peak loads leaves them with a lot of extra compute during non-peak times. That excess compute is essentially free.

ktta 131 days ago [-]
>That excess compute is essentially free.

Since they host on AWS, wouldn't they just scale down during non-peak times and save money?

svachalek 131 days ago [-]
They do, but as a large AWS customer you want to reserve a lot of capacity, both so that it's there when you need it and because it's a lot cheaper than paying on demand. So scaling down below the reserved capacity doesn't save you much, if anything.
majewsky 131 days ago [-]
AFAIK the AWS-hosted part is not the CDN, and the CDN probably makes up the vast majority of their infrastructure.
walrus01 131 days ago [-]
Speaking as an ISP: Netflix traffic comes from the Netflix ASN and peering session. Their globally distributed CDN/caching/video file storage servers are not on AWS to the best of my knowledge. Netflix traffic does not come from our peering with Amazon's ASN.
ktta 131 days ago [-]
If you're talking about OpenConnect, then I think they are really geared to just be a CDN and not much else.
Tarq0n 131 days ago [-]
Netflix has an internal spot market for compute, letting them use their reserved instances somewhat optimally.
jadedhacker 131 days ago [-]
For what it's worth, I don't know what YouTube's actual constraints look like, but it doesn't seem implausible that they might still be open to reencoding their top 10k videos. After a billion views, you might get some substantial savings.

EDIT: After all, they did implement VP8/9 ;)

gowld 131 days ago [-]
Re #1: It's not catalog size that matters; it's more like revenue per minute of video, or views per minute of video

Re #2: This is a fairly standard batch-vs-online compute mix tradeoff faced by large enterprises.

martincmartin 131 days ago [-]
It's not bandwidth savings. It's growing the user base. There are lots of people on low-bandwidth connections. Maybe now they don't find Netflix watchable, because the video quality is too crappy. But if you can fit higher quality video into the same bandwidth, they'd become users.

I agree with your overall point though, that at some point the extra cost for encode isn't worth it.

alephnil 131 days ago [-]
Netflix is in fact working very hard to optimize the quality per bandwidth at low bitrates. Megha Manohara from Netflix had a conference talk about it last year:

https://www.youtube.com/watch?v=S9E9xUbcYAM

Their goal is to have video of acceptable quality at 250 kbit/s, which makes Netflix an option for a lot more people worldwide.

spenczar5 131 days ago [-]
That's part of it, but I think you'd be astonished at how much IP transit can end up costing when you're distributing high-quality video. It's pretty staggering how much bandwidth can cost. I would not be surprised in the slightest if at least 1/4 of Netflix's total engineering spend is on network transit, so any savings that are per-video can have a huge impact.

Source: I work at a company that distributes a lot of high-quality video :)

ksec 131 days ago [-]
Exactly this. I don't know how many times this needs to be repeated. If your original encoding cost were 1 million, 5x or even 10x is only 9 million more. At 100x that is 99 million more; 1000x is roughly a billion. Are you really willing to spend a billion on a codec that saves you 50% of your transfer and a few million in royalty costs?

Especially when Netflix already has Open Connect appliances in many ISP data centers. Bandwidth cost savings are much smaller than most estimate.

However, I do understand her point; she should have asked: if I give you 100x complexity headroom, can you get me an 80% reduction in bitrate compared to HEVC?
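
Rough shape of that break-even in Python (every number below is made up, purely for illustration):

  # Break-even between extra encoding spend and delivery savings.
  # Every figure here is hypothetical; none are real Netflix numbers.
  baseline_encode_cost = 1_000_000    # $/year at 1x encoder complexity
  complexity_factor = 100             # the 100x headroom being discussed
  delivery_bill = 500_000_000         # $/year spent on delivery (made up)
  bitrate_reduction = 0.50            # fraction of bits the new codec saves

  extra_encode_cost = baseline_encode_cost * (complexity_factor - 1)
  delivery_savings = delivery_bill * bitrate_reduction
  print(f"extra encode cost: ${extra_encode_cost:,}")    # $99,000,000
  print(f"delivery savings:  ${delivery_savings:,.0f}")  # $250,000,000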

vetinari 131 days ago [-]
Lowering bandwidth requirements not only lowers bandwidth bills, but also opens new markets. You suddenly have subscribers that you otherwise wouldn't have. There's a long tail of people with slow connections.
rayiner 131 days ago [-]
Netflix is probably at the extreme end. Remember: the same codecs are used to compress the videos your cell phone takes.
haikuginger 131 days ago [-]
Hardware implementations of codecs can be made that perform encode at low power if a certain level of quality/efficiency can be given up.

This is already the case for AVC/HEVC on mobile devices where storage and power considerations overwhelm the possible quality and coding efficiency advances that could be available from a highly-intensive, highly-tunable CPU-based encode.

In comparison, if you look at digital cinema, video is dumped as raw, unencoded pixels onto high-speed, high-power SSDs without any compression; this way, the original image can be manipulated losslessly before going through an intensive encode process for an optimal quality/space balance for end-user media delivery.

tatersolid 131 days ago [-]
The very popular RED digital cinema cameras have always shot in compressed form using a low-latency wavelet-based codec (presumably with some ASIC or GPU acceleration on board).

http://www.red.com/learn/red-101/redcode-file-format

http://www.theblackandblue.com/2012/05/02/shooting-red-epic-...

What cameras shoot uncompressed 8K HDR video? That doesn’t even seem possible given current SSD bus speeds. Note that many high-quality lossless formats are actually still compressed (they subtract current frame data from the previous frame and then apply LZ4 for example).

vardump 130 days ago [-]
> What cameras shoot uncompressed 8K HDR video? That doesn’t even seem possible given current SSD bus speeds.

A 12-bit Bayer RGB matrix at 8K, 24 fps: 7680 * 4320 * 12 bits * 24/s ≈ 1.2 GB/s uncompressed. Currently SSDs go up to 3 GB/s, and you could have a RAID 0 array made out of multiple SSDs.

So it is possible. Whether it makes sense is another matter entirely.

One hour of video at this bitrate requires 4.3 TB.
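
The same arithmetic in a few lines of Python, assuming one 12-bit sample per photosite, which is how a Bayer sensor works:

  # Uncompressed data rate for 12-bit Bayer 8K at 24 fps.
  width, height = 7680, 4320
  bits_per_photosite = 12      # one colour sample per pixel on a Bayer sensor
  fps = 24

  bytes_per_second = width * height * bits_per_photosite * fps / 8
  print(f"{bytes_per_second / 1e9:.2f} GB/s")             # ~1.19 GB/s
  print(f"{bytes_per_second * 3600 / 1e12:.1f} TB/hour")  # ~4.3 TB per hour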

fandango 128 days ago [-]
Did you forget that there are multiple color channels? 12 bits should be 36 bits, so even more is needed.
vardump 128 days ago [-]
No. That's how practically all cameras work. Google "RGB Bayer matrix".

There's just one color component per pixel, thus 12-16 bits per pixel is enough.

rayiner 131 days ago [-]
Right, but designing a codec that is scalable in complexity to accommodate real-time hardware encoding on one end, and 100x Netflix encoding on the other end probably isn't easy.
Twirrim 131 days ago [-]
Why does it have to be the same codec?

Use the right tool for the job.

Slartie 131 days ago [-]
The encoding part may well differ, but having the same bitstream format, and thus being able to use the same decoders for a very wide range of efficiency/time trade-offs, has huge advantages. The necessity of using hardware implementations of video decoders in many scenarios requires a big initial investment in a new codec's decoding side before it can display even its first frame of video.

This is why pretty much everyone has standardized on H.264, and now on H.265, as the video codec of choice, despite using very different encoder software or hardware to realize hugely differing trade-offs on the encoding side of things.

maherbeg 131 days ago [-]
Netflix also encodes to hundreds of combinations of bit rates and resolutions. If improving the quality for a given bitrate lets them reduce some of the complexity (by shrinking the combinations needed to serve users), then there are pretty nice advantages there.
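
For a sense of how the combinations multiply, here is an invented ladder (not Netflix's actual one):

  # Hypothetical encoding ladder: one encode per combination, per title.
  resolutions = ["240p", "360p", "480p", "720p", "1080p", "4K"]
  rate_points_per_resolution = 4       # made up
  codecs = ["H.264", "HEVC", "VP9"]
  audio_variants = 5                   # languages x audio bitrates, made up

  encodes_per_title = (len(resolutions) * rate_points_per_resolution
                       * len(codecs) * audio_variants)
  print(encodes_per_title)             # 360 encodes for a single title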
bufferoverflow 131 days ago [-]
Bandwidth is extremely cheap, so 15 million times a few kB is just 15+ GB, which costs pennies in terms of bandwidth. That doesn't buy you that much encoding compute time.
kemitche 131 days ago [-]
Netflix isn't necessarily trying to reduce the bandwidth on _their_ side. They want to be able to deliver higher quality to users on lower bandwidth connections. The better that goes, the larger their prospective customer base.
dick_sucker2 131 days ago [-]
That's a big issue in rural areas with microwave links. Everyone who has one of these links is limited by bandwidth and they all want to watch Netflix.
icebraining 130 days ago [-]
I wonder if there's a market for a "Netflix ISP" - essentially setting up an antenna for local WAN and a microwave antenna, and then asking Netflix for one of their caching appliances.
Yetanfou 131 days ago [-]
Bandwidth can be cheap, all depending on the connection you're using. It can also be extremely expensive:

[ Dutch ] https://www.telegraaf.nl/vrij/2167192/zoontje-duitsers-kijkt...

If your Dutch isn't up to this, let me summarise: a German family took a boat trip from Kiel to Oslo. Their 12yo son watched a few movies (or clips, no idea) while on board. Total data consumption was 470 MB. They got a bill for €12,000, which comes down to just over €25 per MB.

Bandwidth can be very expensive as well. Of course this is an extreme case but any byte shaved off an encoded stream can end up saving some people a lot of money.

phlakaton 131 days ago [-]
For you, perhaps, it's cheap. For the use cases they are talking about, it's expensive and limited.

Agreed that, if all they can do is shave a "few kB" off a video, it's probably not worth the investment. But what if they can buy a 5–10% bandwidth improvement that actually looks better on the device they're targeting?

jascenso 131 days ago [-]
Note that they are speaking about encoding complexity. The point is that decoder complexity increases are also needed (although not as much!), and every time encoder complexity has increased in the past, decoder complexity has also grown.

The really interesting question is how much decoder complexity increase is acceptable.

nine_k 131 days ago [-]
Suddenly they're in the same space as 3D-rendered movies: they can spend a lot of time once to make the result good for a million viewers.

It, of course, only works if you render widely replicated material. Even an average Youtube video is likely not like that, to say nothing of the encoding done in a consumer-grade camera / phone.

lev99 131 days ago [-]
> encoding done in a consumer-grade camera / phone.

Just brainstorming here, but encoding local photos/videos might be an acceptable task for a cellphone to do when it is fully charged and plugged in.

admax88q 131 days ago [-]
That would severely limit the amount of photos and video you can capture between charging sessions. Most phones don't have a ton of storage, and raw video/photos are huge.
icebraining 130 days ago [-]
Plus storage might not be fast enough to store raw video in realtime. And you'd kill it much sooner with all the writing (larger raw files + reencoded files).
da_chicken 131 days ago [-]
I'm someone with an intermediate level of coding experience, but I have no idea what exactly is meant here by "encoding complexity". What's the difference between 3X and 100X? What's the encoding complexity of h.264 vs MPEG-2? Is it just saying that they're OK with the encoding algorithm taking 100 times as much processing power or taking 100 times as many processor cycles to compress a given data stream?
dingo_bat 131 days ago [-]
It seems to me that despite the tech, Netflix video quality is really horrible. Youtube is consistently much higher quality, even though Netflix's own "fast.com" tells me my connection is capable of 75Mbps downstream. Which should be enough for crisp 1080p. Multiple times I've been so frustrated with Netflix's quality that I've started a movie, stopped it because it looked like something out of a VCR, torrented the true 1080p version and watched that.
c2h5oh 131 days ago [-]
Most likely you're watching in 720p.

In Chrome, Firefox and Opera you're getting 720p max.

To get 1080p you need to be watching in Internet Explorer, Safari or on a Chromebook.

To get 4k you need to be watching in Edge and have a 7xxx or 8xxx Intel CPU and HDCP 2.2 enabled display.

Source: https://help.netflix.com/en/node/23742

shawn 131 days ago [-]
If that’s true, that’s pretty unfortunate. YouTube does 1440p@60fps now.
izacus 131 days ago [-]
Yep, but YouTube doesn't require full DRM support.
cc-d 131 days ago [-]
Requiring protocol level DRM to be included in video/music streaming technologies has always baffled me.

What's the point? Even with theoretically perfect protocol level DRM, the consumer eventually has to be able to see/hear the protected content. If the frames of the video are displayed on screen, and the audio played through the speakers, the output can be recorded and preserved, period.

Do the people in charge of making these decisions not realize that whatever convoluted DRM technologies they pay to be developed and implemented will always be defeated by a $30 camcorder from the 90s?

izacus 131 days ago [-]
After working in the broadcasting industry for a few years... the answer is that the management of the companies in the chain simply doesn't care. DRM absolves them of any kind of piracy blame, and they happily pile blame on people who push back on DRM. That's about all there is to the story - actual results never come into the reasoning.
cc-d 131 days ago [-]
I had always considered this the most likely explanation.

The tendency for humans to implement elaborate but ineffective security theaters in an attempt to convince people they're protected from fundamentally unprotectable threats is as old as society itself.

As a consequence, Chrome can't play Netflix videos at full quality, anybody who flies in the US has to remove their shoes, and my child has to wear a clear backpack to school. I wish we'd stop playing these silly games of pretend which degrade the quality of life of everyone.

da_chicken 131 days ago [-]
> Requiring protocol level DRM to be included in video/music streaming technologies has always baffled me.

> What's the point?

It's the same reason you put a lock on the door to your house. You know the lock is easily bypassed by tools. The windows are easily broken. The door can probably be easily forced open. But you still lock the door when you leave in the morning.

Locks are about keeping casual thieves as honest people. DRM is about keeping casual pirates as honest customers. It's about making it just difficult enough to copy that most people will consider it not worth the bother. It's about saying, "You must be this determined to break the law."

allenz 131 days ago [-]
> making it just difficult enough to copy that most people will consider it not worth the bother

I don't think that the analogy holds up. The deterrence only applies to uploaders and not consumers, because the processes for removing DRM and distributing videos are independent. It just takes one person to make a video available to the entire world. It does seem like torrents are available for most of the Netflix catalogue, so I'm skeptical that DRM is useful for popular shows.

izacus 131 days ago [-]
The analogy is good, but the reasoning is a bit off. Let me fix it - you still lock your door in the morning because you know that your insurance company will not pay out your theft insurance if anything gets stolen. DRM is mandated all the way up the chain (and no one really cares if it works).
TremendousJudge 131 days ago [-]
They realize, of course. Stopping piracy is not the true reason for DRM; the true reason is squashing competition[1]

[1] https://boingboing.net/2017/09/18/antifeatures-for-all.html

syrrim 131 days ago [-]
Here's the real rub: the videos have already been pirated at full quality, and are available online for free. They're doing a poor job of defending something that has already been breached.

That all said, camcorders don't get you anywhere: the goal isn't ripping the video, it's ripping the high quality video. DRM can theoretically defend against that, but you'd need to control the whole stack, incl. hardware, incl. the monitor and speakers.

garaetjjte 128 days ago [-]
>but you'd need to control the whole stack, incl. hardware, incl. the monitor

Even then, you could tear the monitor apart and grab the LVDS signals going to the panel.

vetinari 131 days ago [-]
DRM works on multiple fronts; causal piracy is one. The other is control over player production: what features it will have, who can and who cannot make it, etc. Those who can make good deals will get an advantage.

Someone above linked a help page that says, that for 4K Netflix you need Edge and Intel Kaby Lake or newer. Do you think that it was free for Microsoft or Intel, or some good deal sweetened that?

isostatic 131 days ago [-]
> causal piracy

Not sure if you mean causal or casual. Casual is going to piratebay and downloading.

I would agree that DRM and other anti-consumer things (unskippable things on DVDs, adverts accusing you of pirating the DVD you've actually bought, etc.) do cause piracy, though.

robocat 131 days ago [-]
Recording the analog output has a lower quality than the original stream for 3 main reasons:

1. Marketing and psychology: Viewers want to believe they are viewing the original, not a degraded copy.

2. Unfaithful copy: Analog output and analog input introduce errors. LCDs use a variety of tricks to improve resolution, such as spatial and temporal dithering. Also, you can't use a normal camera to record a monitor because of aliasing (of pixels and non-genlocked frame rates).

3. Encoding noise. The encoding of the original is based on the higher quality original, and carefully optimised for the least visual artifacts. Any re-encoding also has to deal with noise introduced by the copying process, and with the noise introduced by the original encoding. This noise noticeably reduces the quality of a copy.

therein 131 days ago [-]
If anyone is wondering, this whole concept is referred to as the analog hole. [1]

[1] https://en.wikipedia.org/wiki/Analog_hole

oldgeezr 131 days ago [-]
There are many answers to your question but for starters:

- There is no perfect security. There is a notion of raising the expense of piracy to a level that it effectively does not matter.

- IIRC, for instance, rooted Android loses support for... Widevine? So you can't really use Netflix on a rooted device where you could easily steal frames from the video buffer. Yeah, you can rig up a nice camera system and record analog off the display. Nothing they can do about that. They also may insert watermarks to let them know who recorded it.

squeaky-clean 131 days ago [-]
I bought an HDMI cloner box the same day Netflix announced they were adding this DRM.

I actually haven't even taken it out of the box yet. But it just feels good to know their DRM is pointless.

TylerE 131 days ago [-]
They don't care because once the signal hits analog the quality will be much lower.
dingo_bat 131 days ago [-]
But why should I, as a viewer, care about that?
izacus 131 days ago [-]
Because content providers are making you care about that. They _demand_ that you're prevented from seeing high quality TV if your platform doesn't fully lock you out.
isostatic 131 days ago [-]
Do netflix limit their own shows?
izacus 131 days ago [-]
Yes, they apply full DRM to their own content as well.
Latty 131 days ago [-]
The answer is probably "because your content is locked on that provider", but that's a less and less valuable point.

Obviously, BitTorrent was the big reason in the past, but now the reality is that there is a lot of competition in the video space - you aren't just competing with films and tv shows, but youtube videos, twitch streams, etc...

bubblethink 131 days ago [-]
It most likely does for paid offerings. I doubt you'll be able to watch a paid 4K movie on youtube/google movies unless you satisfy their highest widevine requirements. I think youtube's 1080p requirements may be more relaxed than netflix though. Regular yt has no DRM of course.
CyberDildonics 131 days ago [-]
You can actually watch 8k video off of youtube:

https://www.youtube.com/watch?v=Mn24cQLqAE4

shawn 131 days ago [-]
Wow! Even at my monitor's 1024x768, the 8k video looks amazing! So much better than 1080p!

Something something nyquist something

wilun 131 days ago [-]
I'm not 100% sure if you tried to be ironic or if you really reported that the video was better in 8k than FHD.

Because actually, it can be.

8K is overkill, though; 4K would be enough, and 1440p nearly OK, on your old 1024x768 monitor. Typically video encoding subsamples some of the color components. If you play 4K content on an FHD screen, the quality can be better because, at FHD resolution, those components are effectively no longer subsampled, compared to a mere FHD encoding (in most cases).

shawn 131 days ago [-]
It was a stab at irony.

True, but the video is already subsampled. That's how it was able to be uploaded at 1080p at all, since the source video is 8k. So 8k vs 1080p shouldn't make any difference on monitors less than M-by-1080 resolution.

wilun 131 days ago [-]
The video is typically subsampled when encoded at capture resolution, but it is also subsampled at the other encoding resolutions, because subsampling is meant to be applied as part of encoding, and the encoding itself doesn't vary depending on whether the source was downscaled first or not.

So video codecs most of the time work with subsampled chroma components. Your encoded 1080p might be able to render, after decoding, only e.g. 540 lines of those components, while with the 4K stream it might be 2160/2 => back to 1080.

Edit: but to be clear, I'm not advocating that people choose the 2x stream and start watching 4K on FHD screens in general; that would be insane. Chroma subsampling is used because the eye is less sensitive to those color components.
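
To put numbers on that, assuming the common 4:2:0 scheme where each chroma plane is half the luma width and height:

  # Effective chroma resolution under 4:2:0 for a few stream resolutions,
  # all viewed on a 1920x1080 display.
  streams = {"1080p": (1920, 1080), "1440p": (2560, 1440), "4K": (3840, 2160)}

  for name, (w, h) in streams.items():
      chroma_w, chroma_h = w // 2, h // 2   # 4:2:0 halves both chroma dimensions
      print(f"{name}: luma {w}x{h}, chroma {chroma_w}x{chroma_h}")
  # Only the 4K stream's chroma planes (1920x1080) cover the display at full
  # resolution; the 1080p stream's chroma is just 960x540.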

shawn 131 days ago [-]
I would be interested in a double blind experiment confirming that their specific implementation of chroma subsampling is even detectable. The eye is much less sensitive to colors than intensity, as you point out. If it were perceptible, I think the codec designers wouldn't feel it was an acceptable tradeoff.

> Your encoded 1080p might be able to render, after decoding, only e.g. 540 lines of those components, while with the 4K stream it might be 2160/2 => back to 1080.

I'm not sure that's accurate -- whatever downscaling process was used to convert from 8k to 1080p on Google's servers is probably the same process to convert from 8k to 1080p in the youtube player, isn't it? At least perceptually.

I would agree that if they convert from 8k (compressed) to 4k (compressed), then 4k to 1080p (compressed), then that would introduce perceptible differences. But in general reencoding video multiple times is fail, so that would be a bug in the encoding process server side. They should be going from the source material directly to 1080p, which would give the encoder a chance to employ precisely the situation you mention.

Either way, you should totes email me or shoot me a keybase message. It's not every day that I find someone to debate human perceptual differences caused by esoteric encoding minutiae.

verall 131 days ago [-]
It's not just that the eye is less sensitive to chroma.

Although your 4:2:0 subsampled 1080p video only has 540x960 pixels with chroma information, the decoder should be doing chroma upsampling, and unless it's a super simple algorithm it should be doing basic edge detection and fixing the blurry edges chroma subsampling is known to cause. I posit that even with training, without getting very very close to your screen you wouldn't be able to tell if the source material was subsampled 4:2:0, 4:2:2, or 4:4:4.

The truth is that generally people DO subjectively prefer high resolution source material that has been downscaled. Downscaling can clean up aliasing and soften over-sharp edges.

People who watch anime sometimes upscale video beyond their screen size with a neural-network-based algorithm, and then downscale to their screen size, in order to achieve subjectively better image quality. This is even considering that almost all 1080p anime is produced in 720p and then upscaled in post-processing!

foobarrio 129 days ago [-]
It will make a difference if the compression is different. Not all 1080p streams are equal. A 1080p FHD Blu-ray is around 30 Mbps, and I've read that 20 Mbps H.264 is almost indistinguishable from a 30 Mbps Blu-ray. In my own tests using some Star Wars Blu-rays, 10 Mbps looks pretty good compared to the Blu-ray. On YouTube I've seen anywhere from 2-4 Mbps used for 1080p and 7+ Mbps for 4K.

A 4K or 8K stream coming into your computer at 10+ Mbps and being downsampled to 1080p can very well contain more information, even after downsampling, than a lower-quality 1080p stream coming in at 4 Mbps.
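
One rough way to see it is bits per displayed pixel, using ballpark bitrates like the ones above (real encodes vary a lot with codec and content):

  # Bits delivered per displayed 1080p pixel per frame, for two example streams.
  display_pixels = 1920 * 1080
  fps = 24

  def bits_per_displayed_pixel(bitrate_bps):
      return bitrate_bps / (display_pixels * fps)

  print(f"1080p @  4 Mbps: {bits_per_displayed_pixel(4e6):.3f}")   # ~0.080
  print(f"4K    @ 10 Mbps: {bits_per_displayed_pixel(10e6):.3f}")  # ~0.201
  # After downscaling, the 10 Mbps 4K stream has ~2.5x as many source bits
  # behind each displayed pixel as the 4 Mbps 1080p stream.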

brigade 131 days ago [-]
Even ignoring chroma, video compressed at streaming bitrates is nowhere near the nyquist limit for a given resolution.

In addition, YouTube generally encodes 4k at like 5-6x the bitrate of their 1080p encodes (codec for codec), rather than merely 3-4x higher which would be closer to the same quality per pixel.

So yeah, YouTube's 4k is better on a 1080p screen than their 1080p stream.

jakobegger 131 days ago [-]
There is already an 8k display on the market.
isostatic 131 days ago [-]
SMPTE recommend a viewing angle of 30 degrees, which matches THX (26-36).

Assuming you have 20/20 vision, you won't be able to tell the difference between 4K and higher unless your screen fills more than about 40 degrees, in which case you are losing detail at the edges.

An 8K monitor on your desk may make sense - if you're, say, 3' away from it and it's, say, 60", you'll start noticing a difference between 4K and 8K. However, you will be focused on one area of the screen rather than the entire screen.

Even with 4K, for most people watching television/films the main (only) benefits are HDR and increased temporal resolution (60p vs 60i/30p)

All the 8K stuff I've seen comes with 22.2 sound to help direct your vision to the area of the screen wanted. It certainly has applications in room sized screens where there are multiple events going on, and you can choose what to focus on (sport for example).

If you were to buy a 32" 8K screen - say the UP3218K, about 28" wide - then to get the benefit of going above 4K you would need to be sitting within about 30 inches. At 30" the screen would fill about 50 degrees of vision, while even an immersive film should only be 40 degrees.
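
The geometry behind those numbers, using the common rule of thumb that 20/20 vision resolves roughly 60 pixels per degree (the exact threshold depends on which criterion you pick):

  import math

  # Viewing angle and angular pixel density for a ~28"-wide screen
  # (e.g. a 32" 8K panel like the UP3218K) at a given viewing distance.
  def viewing_angle_deg(screen_width_in, distance_in):
      return 2 * math.degrees(math.atan(screen_width_in / (2 * distance_in)))

  angle = viewing_angle_deg(28.0, 30.0)
  print(f"screen fills {angle:.0f} degrees of vision")   # ~50 degrees
  for label, px in [("4K", 3840), ("8K", 7680)]:
      print(f"{label}: {px / angle:.0f} px/deg")          # ~77 and ~154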

fabian2k 131 days ago [-]
It's even more complicated in my experience. You get up to 1080p in Edge compared to only up to 720p in Chrome or Firefox, but you still don't get 1080p guaranteed. For me it varied depending on the specific movie or TV series, and I often still didn't get 1080p in Edge, and even only 480p sometimes (as confirmed with the debug overlay). Only switching to the Netflix Windows 10 App and later to a Smart TV actually fixed this and gave me consistent 1080p content.
nine_k 131 days ago [-]
Do they offer any reasons for these limitations?

In particular, is it due to DRM requirements, or pure performance? I suspect it's the former.

Daiz 131 days ago [-]
It's 100% a DRM thing. Desktops are pretty much the most powerful platform to watch Netflix on yet the most restricted in terms of available video quality due to their open nature.

For example, my desktop with an i7-2600k (that's a Sandy Bridge CPU from 2011) has zero issues playing 4K60 VP9 footage on YouTube in Chrome with CPU decoding, yet on Netflix with the same Chrome I'm arbitrarily restricted to 720p H.264 video.

Crosseye_Jack 131 days ago [-]
It’s DRM. The Widevine configuration they are using means they are decrypting and decoding in software when you use Chrome or Firefox. When you use Edge you use a different DRM scheme that allows decrypting, decoding and rendering in hardware, so Netflix offered content up to 4K in Edge with recent Intel CPUs. (Last time I checked, Ryzen had only just come out with no onboard GPU, but support for recent Nvidia GPUs was promised; it's been a while, so the landscape may well have changed.) If you didn't have the latest Intel CPU it fell back to an older version of PlayReady (pretty sure that's the brand name of MS's DRM - on my phone and a bit too lazy to look it up) that still supported 1080p.

See, in Widevine there are a number of “levels”, the highest being when it can decrypt, decode and push to the frame buffer all in a secure zone. This cannot be achieved (at the moment, or at least at the time of my research into the matter) with Widevine on desktop, so in such a setup Widevine will only decrypt up to 720p content.

When running on Android and ARM this is possible and you can get 1080p, which is why cheap Android-based TV sticks (even the old Amazon Fire TV sticks) supported 1080p but your gaming rig running Chrome could not.

I don't work for Widevine, Google, Netflix or anyone else for that matter. Just a nerd with too much time on my hands, so I look into this stuff. Any corrections welcome :-D

Crosseye_Jack 131 days ago [-]
No longer able to edit, so replying to myself: taken from another post of mine about Widevine 7 months ago where we were discussing why the Raspberry Pi couldn't support 1080p Netflix (https://news.ycombinator.com/item?id=15594460 and a link to the comment chain to make it easier for anyone reading - https://news.ycombinator.com/item?id=15586844)

> As far as I understand it there are 3 security levels to Widevine, Level 1 being the highest and Level 3 being the lowest.

> Level 1 is where the decrypt and decode are all done within a trusted execution environment (As far as I understand it Google work with chipset vendors such as broadcom, qualcomm, etc to implement this) and then sent directly to the screen.

> Level 2 is where widevine decrypts the content within the TEE and passes the content back to the application for decoding which could then be decoded with hardware or software.

> Level 3 (I believe) is where widevine decrypts and decodes the content within the lib itself (it can use a hardware cryptographic engine but the rpi doesn't have one).

> Android/ChromeOS support either Level1 or Level3 depending on the hardware and Chrome on desktops only seems to support Level 3. Kodi is using the browser implementation (at least when kodi is not running on Android) of widevine which seems to only support Level 3 (So decrypt & decode in software) and therefore can not support hardware decoding. But that doesn't mean that hardware decoding of widevine protected content can not be supported on any mobile SoC. Sorry if I gave that impression.

> When a license for media is requested the security level it will be decrypted/decoded with is also sent and the returned license will restrict the widevine CDM to that security level.

> I believe Netflix only supports Level 1 and Level 3, which is why for a while the max resolution you could get watching Netflix in Chrome in a desktop browser was 720p, as I believe that was the max resolution Netflix offered at Level 3. We had to use Edge/IE (iirc) to watch at 1080p, as it used a different DRM system (PlayReady), and it is why desktop 4K Netflix is currently only supported on Edge using (iirc) Intel gen7+ processors and Nvidia Pascal GPUs. (I don't know if AMD supports PlayReady 3.0 on their GPUs, as I don't have one so haven't really had the desire to investigate; I'm guessing that current Ryzen CPUs do not, as they currently don't have integrated GPUs.)
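
A sketch of that policy as a lookup table. The caps are Netflix's per-platform choices as described in this thread, not anything defined by the DRM specs, and they may well have changed since:

  # Illustrative only: roughly how DRM system + security level mapped to the
  # maximum stream Netflix would serve, per the description above.
  MAX_RESOLUTION = {
      ("Widevine", "L1"): "1080p",    # decrypt + decode inside a TEE (Android/ChromeOS)
      ("Widevine", "L3"): "720p",     # software decrypt/decode (desktop Chrome/Firefox)
      ("PlayReady", "3.0"): "4K",     # Edge + recent Intel CPU + HDCP 2.2 path
  }

  def max_resolution(drm_system, level):
      # "SD fallback" is a made-up default for unlisted combinations.
      return MAX_RESOLUTION.get((drm_system, level), "SD fallback")

  print(max_resolution("Widevine", "L3"))   # 720p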

kec 131 days ago [-]
I'd wager the 720/1080 split is due to the former being their limit for browsers doing software decode. 4k being restricted to edge sounds like Microsoft is the only one supporting HDCP (drm).
8xde0wcNwpslOw 131 days ago [-]
It could also be a lot worse than 720p. At least for me, most bigger (not netflix-original) titles are limited to 480p with maximum bitrates even below 1000 kbps when using a "partially" supported browser like Chrome.

But even using Edge is not a silver bullet for all content, as some content seems to be limited to that low-bitrate 480p on all browsers, even if higher quality is available in a TV app.

c2h5oh 131 days ago [-]
Control+Alt+Shift+S lets you override the bitrate used.
8xde0wcNwpslOw 131 days ago [-]
Yes, I'm using that to see what bitrates are available. For the mentioned content, they range from very low to low (up to around 1000 kbps).
DavidVoid 129 days ago [-]
>In Chrome, Firefox and Opera you're getting 720p max.

You can actually get some content in 1080p in Chrome and Firefox with a browser extension. It is somewhat unreliable however and some videos still get capped at 720p.

Chrome extension (with explanation of how it works): https://github.com/truedread/netflix-1080p

Firefox extension (unfortunately doesn't seem to work at the moment): https://github.com/vladikoff/netflix-1080p-firefox

earenndil 131 days ago [-]
Do note that it's possible to watch in 1080p with this addon[1]. 4K is probably not possible to spoof.

1: https://addons.mozilla.org/en-US/firefox/addon/force-1080p-n...

IshKebab 130 days ago [-]
Wow that is ridiculous. I thought Netflix owned most of their content and wouldn't need to kowtow to ridiculous DRM demands like this? Or is this their own doing?
dingo_bat 131 days ago [-]
Even then, 720p from a torrent looks much better than what I get from Netflix. And I've tried Chrome, Edge and even the Win10 UWP app.
voltagex_ 131 days ago [-]
What source is your torrent from?

A WEB-DL is different from a (1080i) HDTV capture is different from a Blu Ray rip.

Netflix are optimising for bandwidth over quality - hell, the audio still seems to be 96 kbit AAC.

graedus 131 days ago [-]
Press Control+Alt+Shift+D in Netflix to see information like the resolution of your video stream, bitrate, etc. As sibling comments mention, the max resolution can depend on your browser.

On youtube, you can right click > stats for nerds for some similar info.

mfrw 131 days ago [-]
Thanks for this. [TIL]
SirHound 131 days ago [-]
Could it be your ISP is targeting and throttling Netflix? There's probably a way to verify this.

I tend to find the quality of HD Netflix streams pretty great over my PS4. Certainly never a cause to download a torrent instead.

dingo_bat 131 days ago [-]
> Could it be your ISP is targeting and throttling Netflix? There's probably a way to verify this.

I believe fast.com is supposed to test against actual Netflix video delivery servers, just to detect this kind of ISP fuckery.

dooglius 131 days ago [-]
You can do a pretty direct test with https://www.netflix.com/title/70136810
dingo_bat 131 days ago [-]
That just takes me to the netflix homepage for some reason.
isostatic 131 days ago [-]
On a PC you can apparently choose a higher bitrate [1], but it's still pretty ropey from what I can see. I get 66 Mbit on fast.com, but look at how poor this still [0] from Civil War is. Perhaps it's just because it's Firefox on Linux [2] - are these values in kbits? I normally watch Netflix on my LG TV and it tends to average 10 Mbit, which is still far less than a Blu-ray, but it's "good enough".

[0] https://i.imgur.com/cC2DJgN.png

[1] https://www.addictivetips.com/web/force-netflix-to-stream-in...

[2] https://i.imgur.com/LFPWR54.png

gempir 131 days ago [-]
Could it be that you are using an incompatible browser? E.g. Chrome on Windows does not work very well; the native Netflix Windows app delivers way better quality.
wink 131 days ago [-]
TIL there's a Windows app; I'll try that out as well, thank you. FWIW I haven't noticed huge differences between Chrome and Firefox on Win10, though I usually blamed the WiFi for bad quality.
cpeterso 131 days ago [-]
Chrome and Firefox use the same DRM (Google Widevine), so they receive the same 720p video from Netflix. Edge and Safari get 1080p because they use hardware-based DRM (Microsoft PlayReady and Apple FairPlay, respectively).
lightbyte 131 days ago [-]
Isn't that because Chrome refuses to add the DRM Netflix needs for 1080p content? (I believe it was Silverlight.)
haikuginger 131 days ago [-]
I think it's much more arbitrary. Silverlight is on life support now that EME/HTML5 video is supported on most platforms, but Netflix has historically chosen to only support 1080p video on the first-party browser for any given OS (Chrome on ChromeOS, Safari on macOS, and IE/Edge on Windows).

EDIT: Looking at some stuff, it seems like Netflix might "trust" a first-party browser to select the highest-quality stream that it has hardware video decode support for. In comparison, it sounds like there are extensions that enable 1080p in Chrome by pushing it into the list of playlist options, but it can cause a serious performance hit by decoding on the CPU.

21 131 days ago [-]
Are you sure you have a 1080p subscription? The basic one, single device, is 720p only. You need the two device subscription for 1080p
kevin_thibedeau 131 days ago [-]
I have the 720p subscription and Netflix will occasionally drop down to a crummy low res bit rate when playing via a 4k Roku. It isn't pleasant.
acchow 130 days ago [-]
You're having an abnormal experience. In my own home and all my friends', YouTube looks terribly blocky (low bitrate video) compared to Netflix.
zellyn 131 days ago [-]
I see this:

  In the ITU-T VCEG and ISO/IEC MPEG standardization world, the
  Joint Video Experts Team (JVET) was formed in October 2017 to
  develop a new video standard that has capabilities beyond
  HEVC. The recently-concluded Call for Proposals attracted an
  impressive number of 32 institutions from industry and
  academia, with a combined 22 submissions. The new standard,
  which will be called Versatile Video Coding (VVC), is expected
  to be finalized by October 2020.
Can anyone with more industry knowledge chime in here? To me, that sounds a lot like the kind of group that created the patent- and royalty-encumbered stuff the AOM was created to avoid.
derf_ 131 days ago [-]
I have some industry knowledge. You are exactly correct.
simonbh 131 days ago [-]
At the bottom of page 3, Patent and Copyright is covered. I am not a lawyer and it is not clear to me how this mixing of code from different groups will work from a licensing standpoint. https://www.itu.int/en/ITU-T/studygroups/2017-2020/16/Docume...
ksec 131 days ago [-]
Well, they are the same group, or same organisation that created the group for HEVC.
ksec 131 days ago [-]
>To address the disconnect between researchers in academia and standardization and the industry users of video coding technology,

This annoys me quite a bit, because it then lists out the so-called large-scale video encoding done by Facebook, Twitter, and YouTube, as if video encoding were only done by OTT providers and the encoding done on iPhones (consumers), for TV broadcast (the good old content distributors), and for livestreams of events from sports to the Olympics, etc., didn't matter. It is a very Silicon Valley mentality, and it shows in AOM / AV1. After all, they are creating their own codec for their own use, while other industry codec organisations have to take care of many "edge" use cases.

>So how will we get to deliver HD quality Stranger Things at 100 kbps for the mobile user in rural Philippines? How will we stream a perfectly crisp 4K-HDR-WCG episode of Chef’s Table without requiring a 25 Mbps broadband connection?

It is interesting that this 100 kbps bitrate and rural Philippines example comes up, because it is the exact same one that Amazon's video specialist Ben Waggoner mentioned on Doom9.

Shouldn't we be a little more realistic with the bitrate? We have 20 years of experience and research, and yet we still don't have a single audio codec (two dimensions) that can perform better than 128 kbps MP3 at half the bitrate. Opus only manages to slightly edge it out at 96 kbps, and that is with selected samples. There is only so far we can go; 100 kbps is barely enough for audio. And we have massive MIMO and 5G, both of which will bring immense capacity increases to current networks. There is so much in the pipeline to further improve efficiency, capacity, latency, cost, and power. It is a little hard to justify designing for 100 kbps video.

Currently YouTube streams 1080p AVC at ~2.2 Mbps, which already seems to be fine for most people, especially at computer, tablet, or smartphone screen sizes. HEVC can probably deliver similar quality at 1.5 Mbps, so VVC should be aiming at below 1 Mbps. Netflix is doing 15 Mbps 4K streaming with HEVC (and people are complaining about the quality already; I have no idea why, I don't watch Netflix), so VVC should really be aiming at better quality at 8 Mbps. We should aim at specific bitrates and resolutions, with real-world encoders as the anchor, and specific quality targets to achieve.

hcnews 131 days ago [-]
In most of your reply you have failed to account for the third world (and frankly a decent chunk of first world). It seems to me you don't have an understanding of your user base. Please dig into how many people are actually on HD-capable bandwidths.

I am glad that Silicon Valley is driving it since they have users all over the world, not just fiber-enabled users. If and when other people have a better aim, vision and understanding of the problem, I think you should find a good collaborative atmosphere.

In the meantime, let's not design for 100m users when we should be thinking about 2-3b users.

kolpa 131 days ago [-]
You aren't disagreeing with parent, who said that the 2-3B users don't have HD-capable connections. The parent is saying that the industry should try delivering SD-quality video before hand-waving about near-infinitely compressible HD-video.

For example, one solution for low-bandwidth environments is to edge-cache it (farther toward the real edge than "local ISP") and then spread it locally via peer-to-peer short-distance radio (wifi, bluetooth, local cellular). This is not a 100x data compression challenge; it's a local delivery infrastructure challenge.

ksec 131 days ago [-]
>Please dig into how many people are actually on HD-capable bandwidths.

I did mention YouTube is doing 1080p at 2.2 Mbps. We could be doing 1.2 Mbps with HEVC today already. That is HD-capable bandwidth. And 720p is also HD.

>In the meantime, let's not design for 100m users when we should be thinking about 2-3b users.

There are roughly 5 billion phone users worldwide, 3.3 billion of them with smartphones. 1.3 billion users are on LTE, and most of the other 2.3 billion have LTE-capable smartphones but are not on an LTE plan, or their country is on its way to LTE. There are still 600M users in China who do not have a smartphone but could have had LTE had they chosen to. India is skipping 3G and moving on to 4G, with 300M smartphone users and adding even more as we speak. We are expecting 2B+ users on 4G or 5G by 2020, with the majority of the remaining 1.4B users choosing not to be on LTE rather than it being inaccessible to them. And in developing parts of the world, there is little incentive to keep 3G equipment and spectrum around.

I pointed out that we have lots of network improvements in the pipeline. Purely in terms of technical achievement, massive MIMO is arguably the biggest invention since wireless communication itself. We will see huge increases in capacity, available to everyone and cheaply. It isn't about first world or third world. The US used to have the worst telecom services, even compared to some third-world countries, but the smartphone changed that because users are willing to pay for better service. As users move up the ladder, the impact is much more profound in developing countries. As pointed out by Benedict Evans, there are a lot of places where charging the phone costs more than the data plan.

I am not saying we shouldn't care about the rest, but a new video codec focusing on 100 kbps is actually wasting energy on the few rather than the many. By the time such a codec is through research and implementation and hardware is available to mobile users, we are talking about a cycle of 3-4 years at best, more likely 6+ years. Network bandwidth will have improved even more by then.

Another point worth mentioning is the post about Netflix's 270 kbps encoding. x264 and x265 were never great at encoding low bitrates. RMVB / RV10, or RV9 EHQ from RealMedia (God I am old....), always had better perceived quality at those bitrates. It isn't that we need a new codec aiming at low bitrate; it is that the current encoders are hardly optimised for those bitrates. There is a lot that could be done in decoder filters and pre-encode video cleaning to achieve better perceived quality at those bitrates, and RMVB used to do a very good job of it. The new RV11, which is based on HEVC, may have applied those tools as well. (I have not tested RV11, so I am not sure.)

MayeulC 131 days ago [-]
Are there transmission schemes that can transfer multiple independent streams that add up for better quality?

It could be streams containing information about smaller subblocks, etc.

If you take a Fourier transform, the more coefficients you include, the more faithful the reproduction is.

Splitting the quality level into multiple independent streams could have multiple advantages :

- better use of Multicast, as viewers for different quality settings get the same data, so you use approximately the bandwidth for one full quality video instead of full quality + medium + low...

- save on storage space for the same reason

- save on transcoding time, as only one pass is needed

- better suited for distributed storage/transmission, on a platform like PeerTube, where every p2p client could contribute back instead of clustering by quality.

I am not working in this field, but I know a fair bit about compression, and this seems a no-brainer to me. Is it already done? Where? Or did I overlook some issues?

0x09 131 days ago [-]
This is known as scalable coding and has been included as an extension or profile in every ISO and ITU standard since at least MPEG-2; see for example https://en.wikipedia.org/wiki/Scalable_Video_Coding (this article seems to be specific to H.264)
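
A toy numpy illustration of the base-plus-enhancement-layer idea (real SVC uses motion-compensated inter-layer prediction, not a plain pixel residual):

  import numpy as np

  # Toy "scalable" scheme: a low-resolution base layer plus a residual
  # enhancement layer that, added back, reconstructs the full-resolution frame.
  rng = np.random.default_rng(0)
  full = rng.integers(0, 256, (8, 8)).astype(np.int16)   # stand-in for a frame

  base = full[::2, ::2]                                   # 2x-downsampled base layer
  upsampled = np.repeat(np.repeat(base, 2, axis=0), 2, axis=1)
  enhancement = full - upsampled                          # residual layer

  # A low-bandwidth client decodes only `base`; a better-connected client also
  # fetches `enhancement` and recovers the full-resolution frame exactly.
  assert np.array_equal(upsampled + enhancement, full)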
MayeulC 129 days ago [-]
Thank you a lot for the proper technical term. Do you happen to know if it is widely used in web-based streaming applications (I would be surprised, as I don't think I've encountered it before), if ffmpeg supports it (in the case of H.264), and if AV1 includes such an extension?
lopmotr 131 days ago [-]
"new techniques are evaluated against the state-of-the-art codec, for which the coding tools have been refined from decades of investment. It is then easy to drop the new technology as “not at-par.”

This seems to be quite a general technology problem that looks like it applies to car engines, batteries, computer memory and any other highly optimized mature technology. I wonder if there's some way to change incentives so others get a chance. It happens in evolution in nature too and it's sometimes solved by mass extinctions. Hopefully that's not the only way.

pishpash 131 days ago [-]
That's what research programs were supposed to do. The problem occurs when research is also evaluated against the same specific criteria the industry wants, so that there's no room for new ideas to be fully funded through the long maturation process.
vidanay 131 days ago [-]
I want to see an encoding that is basically RLE using offsets and lengths of π.
21 131 days ago [-]
vidanay 131 days ago [-]
I don't see any references to pi in that.
shmerl 131 days ago [-]
A post about innovation and moving beyond block-based encoding and no mention of Daala's idea of lapped transforms?
cjensen 131 days ago [-]
There is no shortage of new ideas for video encoding. The post is welcoming both new ideas and encouraging more new ideas rather than being any kind of summary of current tech.
shmerl 131 days ago [-]
I guess there might be no shortage in general, but there are few which are publicly developed with the aim to make free codecs. Which other efforts are there besides Daala?
ksec 131 days ago [-]
All the wavelet-based codecs have shown good results in the lab or in tests, but in practice they are extremely hard to tune, and the engineering hours that need to be put in before one becomes viable are far beyond what any single company is willing to invest.

I think Daala's ideas will likely live on in AV2. But let's wait for AV1 to get out the door first before they work on that, as the promised bitstream freeze still hasn't happened.

kylophone 131 days ago [-]
shmerl 131 days ago [-]
It's still block based. I was talking about new approaches which the blog post is referencing.