The rest of the article made reasonable points but this is a really bad argument.
The cited claim that congestion control was application-limited 50% of the time used bandwidths of either 10 Mbps or 50 Mbps. Meanwhile, the median mobile device is on a low-end (~15th percentile) 3G connection, which I would guess is around 1 Mbps or less; 45% of devices were on 2G in 2017 according to the author's own citation.
I'm writing this from Senegal, currently on great internet with a 350ms RTT; mobile RTT often hits 1s. At those latencies I really couldn't care less how much time my phone spends decrypting if you can save me round trips!
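To make the round-trip point concrete, here's a rough back-of-the-envelope calculation at that 350ms RTT. The round-trip counts below are typical simplified figures for each handshake, not measurements from the article:

```python
# Rough illustration: on a high-latency link, time to first byte is
# dominated by round trips, not by crypto work on the device.
RTT = 0.350  # seconds; the RTT mentioned above

# Round trips before application data can flow (simplified, typical counts)
handshakes = {
    "TCP + TLS 1.2 (3 RTT)": 3,
    "TCP + TLS 1.3 (2 RTT)": 2,
    "QUIC 1-RTT": 1,
    "QUIC 0-RTT resumption": 0,
}

for name, rtts in handshakes.items():
    # Each saved round trip is worth a full RTT of wall-clock time
    print(f"{name}: {rtts * RTT * 1000:.0f} ms before first byte")
```

Dropping from 3 round trips to 1 saves 700ms on that link before a single byte of the page arrives, which dwarfs any plausible decryption cost on the phone.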
But on the issue of who it's for: if it helps the less connected be much more connected than it does the well connected, it gets hard to argue against it, unless it leads to a loss of decentralization or some other core principle.
While larger packets decrease some overhead, they increase the exposure to single-bit errors, require larger send/receive retransmission buffers, and hurt bad connections while only minimally improving good ones.
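The single-bit-error point can be sketched numerically: if any corrupted bit spoils the whole packet, the loss probability grows with packet size. The bit error rate below is an assumed figure for illustration, not from the source:

```python
# Sketch: probability a packet is corrupted, given an independent
# per-bit error rate (assumed value for a poor radio link).
BER = 1e-6  # assumed bit error rate

def packet_loss_prob(size_bytes: int, ber: float = BER) -> float:
    # Packet survives only if every bit survives: (1 - ber) ** bits
    bits = size_bytes * 8
    return 1 - (1 - ber) ** bits

for size in (1500, 9000, 64000):
    print(f"{size:>6} B packet: {packet_loss_prob(size):.1%} chance of corruption")
```

At this assumed BER a standard 1500-byte packet is corrupted about 1% of the time, while a 64 KB packet is corrupted roughly 40% of the time, which is why bigger packets hurt bad links far more than they help good ones.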
I don't know the full protocol, but if you can decrease round trips you improve latency while also taking a ton of load off all intermediate systems: less RAM used for buffering, etc. "Bufferbloat" has been an issue for a long time.
As for the pros/cons, I'm only suggesting increasing maximum packet size and using whatever size works for a given connection. I suppose that adds some complexity to negotiation and operating modes. Cost for memory buffers is a good explanation.
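"Using whatever size works for a given connection" could look something like path MTU probing (in the spirit of DPLPMTUD): binary-search for the largest packet the path actually delivers. This is a hypothetical sketch, not any protocol's actual negotiation; `send_probe` and the size bounds are made-up placeholders:

```python
# Hypothetical sketch: find the largest packet size a path accepts by
# probing with test packets and binary-searching on the acked sizes.
def probe_max_size(send_probe, lo: int = 1200, hi: int = 9000) -> int:
    """send_probe(size) -> bool: True if a probe of `size` bytes was acked."""
    best = lo  # assume the conservative floor always works
    while lo <= hi:
        mid = (lo + hi) // 2
        if send_probe(mid):
            best, lo = mid, mid + 1   # this size works; try larger
        else:
            hi = mid - 1              # too big; try smaller
    return best

# Example: a path that silently drops anything over 4000 bytes
print(probe_max_size(lambda size: size <= 4000))
```

The complexity cost mentioned above shows up here: both endpoints need probe logic, timers for lost probes, and re-probing when the path changes, which is real protocol machinery just to pick a number.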