Speed Is the Killer Feature (bdickason.com)
atleta 1143 days ago [-]
He's kind of totally wrong about the phones and thus the speed being THE killer feature. First of all, Symbian phones, which were the market leader smartphones when the iphone was released were pretty fast. So were feature phones (i.e. dumb phones).

What the iPhone was a LOT better at than everyone else was UX, of which speed is one component, of course. It's funny how people never got it, even though it happened in front of us, it happened to us. At the time I was working at Nokia Research, and I remember my girlfriend telling me how her boss got this wonderful phone that you could take photos with and then view them, etc. The funny thing is that I had had such a phone since 2001. I had been working with smartphones for 6 years by then, she knew it, she listened to me when I told her or others what I was doing (and then listened to others responding "yeah, but phones are for making phone calls"). She saw me browsing the net on my phones (a 9210 Communicator and then a 9500), sending emails from the beach, etc.

Still it somehow didn't register, because it looked like something that she'd never use. And then the iPhone, which did a lot less, made her and basically everyone understand what a smartphone is. (Even though by then Symbian smartphones were pretty common, most people didn't use them as smartphones.)

So no, it's not simply the speed. It's the UX. And even if we talk about speed, it's still not the speed, but it's the perception of the speed, which a lot has been written about: delay (lagging) matters a lot even if speed on average is OK.

adrianmonk 1142 days ago [-]
As long as we're offering opinions on the iPhone's killer feature, mine is that it was access to desktop web sites.

Remember WAP[1] and WML[2], the HTTP and HTML substitutes for mobile phones too anemic/limited to support the real thing? Back then, many web sites simply didn't support access from a mobile device. (It's the polar opposite of "mobile first" or "mobile only".) A few did, but many just tossed up an error page.

With the iPhone, Apple put together all the key ingredients to be able to say, if you're on the go and suddenly need to access your bank's web site to check your balance or whatever, you will be able to, even if your bank doesn't support mobile devices. The experience may not be great, but it will at least be possible.

Those key ingredients included a big screen, a fast enough processor and large enough RAM to handle pages that were somewhat bloated, a browser that supported enough (JS, etc.) to make most pages work, and special features for making the most of desktop-oriented pages by zooming in on text. To some extent, Apple brought these key ingredients together by designing it that way, but they also did it by not entering the market until powerful enough hardware was available.

The iPhone flipped mobile web access on its head. Instead of implementing whatever was convenient and punting on 50+% of the web, leaving users at the mercy of web sites to decide if mobile access was worth it to them, Apple created a device and browser that took responsibility for doing anything and everything it could to make sites work.

The web is a killer feature for the internet, and getting meaningful access to the web was a killer feature for internet-connected mobile devices. Paradoxically, it worked so well that the platform was enormously successful and it became essential to offer mobile web support.

---

[1] https://en.wikipedia.org/wiki/Wireless_Application_Protocol [2] https://en.wikipedia.org/wiki/Wireless_Markup_Language

auganov 1142 days ago [-]
Windows Mobile came with a regular browser for years before the iPhone. There was nothing crazy about the first iPhone's specs either, pretty much in line with the rest of the market. While it wasn't big with regular consumers, WM had a decent smartphone market share at the time (Windows Phone would never come close). Amongst executives, managers and the like, it was absolutely dominating. When it first came out, many WM users (including Microsoft) considered the iPhone to be a joke. At best, another player in a crowded market. Sure, people liked some features. But they were considered easy-to-replicate gimmicks (and many were indeed quickly copied by enthusiasts). WM was way ahead as a platform. iOS didn't even let you install apps! Apple PCs/laptops were fairly niche at the time too. It wasn't obvious how Apple had a defensible advantage.

Quite frankly, I still tend to think it's as much about Apple knocking it out of the park with the UX (and marketing), as it is about Microsoft doing literally everything wrong in response. WM could have become what Android is today.

atleta 1134 days ago [-]
Nokia smartphones had http+html browsers back in 2001. I don't know how many mobile-optimized sites there were, but the browser on my 9210 was pretty usable (though not everything worked, of course). I used to save full pages on my desktop and copy them onto the 9210 to read while on the go. (Well, we didn't have Pocket back then, and while I definitely wanted to develop something similar, I just couldn't make myself do it in Symbian's C++ dialect.)
mercer 1139 days ago [-]
I disagree in that internet on the original iPhone was terrible, and everyone I knew who bought one didn't really do much browsing on it. It was the UI/UX. Being able to touch-drag was so different from the indirectness of the buttons on previous phones, or using a pen on a squishy screen (I was the nerd that was previously into Palm and Windows CE(?) devices). I remember that what my friends who had the original iPhone showed off wasn't primarily the browser, but rather the pinch-zoom feature on the photos they took, the drag-bounce effect in apps, and of course just the whole thing of having a big screen with no keyboard or somewhat gaudy keypads.

But I also agree with you in that my own first iPhone was the iPhone 3G, and the 3G part and later the App Store became a major thing for the device, whether it was browsing the web or using internet-centered apps (chat, sms, reddit, etc.).

junon 1142 days ago [-]
Agreed. When I didn't need to boot up BREW or whatever it was called to access google at what felt like 1Bps anymore, it was a game changer. This alone was a huge improvement - I had real access to the web all of a sudden, from anywhere.
justinboogaard 1142 days ago [-]
This is a great insight, I’m reading this from my iPhone now!
N1H1L 1142 days ago [-]
I think both you and Brad are right in some way.

The KILLER feature is the total time to do something that the user intends to do.

If you have a very fast OS, but bad UX then the rate-limiting step is the UX, not the OS. And the converse is also true.

Spivak 1142 days ago [-]
Your app's UX is the language your users must learn and speak to convey their intent to software.

Having an expressive vocabulary and complex grammar is great for saying a lot quickly if they’re fluent but painfully slow for anyone who isn’t.

jandrese 1142 days ago [-]
IMHO it was more a problem of common functions being buried 5 menus deep in a sluggish UI.
Pyramus 1143 days ago [-]
Came here to say exactly the same. I would add the capacitive touch screen as another crucial factor that made the iPhone UX so popular.
hinkley 1142 days ago [-]
The capacitive touch and the accelerometer allowed them to make a web browser that could display 'normal' web pages. Up until then everyone had been dicking around with mobile web sites and the lack of ubiquity and cost of doing so... as well as the often hamfisted attempts to assume why you were on the website from mobile... all of these hamstrung mobile browsing adoption.

With this in place commerce could begin on the phone. Once everyone added mobile pay options it could end there as well. And now everyone has one, if they can.

apozem 1142 days ago [-]
My mom had a touch-capable phone with a resistive touchscreen and hated it. Her fingernails were not huge or anything but they were long enough she had to press with the pad of her finger, not the tip, and it crippled her accuracy.
bdickason 1142 days ago [-]
I think you're right. Speed is a component of the User Experience. My point in writing this is that when you abstract to a higher level, the beauty of the UX was that you can instantly do whatever you want. Your thought -> your touch -> action.

However, I think you make a great point that the two are interrelated.

My straw man starting point would be: A poor user experience that is lightning fast can still be a great experience.

But a great user experience that lags or is slow will typically not be successful.

The iphone succeeded because it coupled a great user experience that was so fast that it felt like interacting with objects in the real world.

ulisesrmzroche 1142 days ago [-]
AOL was very successful, even though it was slow and laggy, and calling the UX “meh” is being generous.

The iPhone won because it looked amazing and had the App Store. Looks and features. How did you reach the conclusion that it was speed?

funcDropShadow 1142 days ago [-]
The app store for native apps came around a year after the iPhone.
ulisesrmzroche 1141 days ago [-]
So? That was an early adopter year. For the vast majority of people, the iPhone has always had an App Store.

Actually, that supports my point. The App Store came about because people wanted it. No one said: "oh no, bloat, my phone will slow down".

baxtr 1142 days ago [-]
I think you’re talking on a different level of granularity. What was really different? “The UX” sounds generic. Whereas Speed is very specific. Speed is part of the UX.

It would help me at least if you could specify/ list what you think were the things in UX that made it so much better than Symbian phones.

anton96 1142 days ago [-]
One word : Gestures.

We can think of things like slide-to-unlock, but much more importantly scrolling. If anything shows the mastery of the iPhone UX of that time, it is definitely scrolling; who even categorizes it as a gesture now? It's completely normalized. On all other platforms of that time, you had to play with arrows and the scroll bar. Now every platform has it.

baxtr 1142 days ago [-]
Ah very good. Yes, I agree. Simple things like a scroll bar in a browser window, right? I remember how difficult it was to tap that small little icon in the lower right corner of the screen. They simply copied mouse behavior onto the phones. And Apple thought this out from a zero-level perspective. Thanks for the insight!
asoneth 1141 days ago [-]
Using a touchscreen pre-iPhone usually felt like executing commands on a computer application.

Using an iPhone felt like directly manipulating the underlying content. It was a qualitatively different experience that only superficially resembled previous touchscreen devices insofar as it used similar input hardware.

Note that Apple didn't invent the concept of a responsive UI thread with physics-based UI metaphors, for example Jeff Han demonstrated a fairly sophisticated example at TED2006[1] the year before. But to my knowledge the iPhone was the first mass-produced device with a direct manipulation interface.

[1] https://www.ted.com/talks/jeff_han_the_radical_promise_of_th...

atleta 1142 days ago [-]
Maybe. But the author says speed IS the killer feature (for products) so it has to be true at the higher level too. UX is the perception of the user of the product and their, well, experience using the product. If it's generic it's because it really is that generic, because users won't know why exactly they like a product.

But in the case of iOS vs Symbian:

- as others said: capacitive touch screen (this is not an OS issue, but the iPhone was among the first to use it, definitely earlier than Nokia). This is huge. The thing that everyone around me was talking about when the original iPhone came out was how you could swipe to see the pictures. And it wasn't just for paging, it defined how you could interact with the phone (think pinch zooming, and rotation; not sure when these were added).

- the touch screen UI itself. Nokia played around with touch UI before, but never really liked it. It was expressed several times internally that touch was just a no-go. But no wonder: the resistive touch screen is pretty bad, and also Symbian itself was built on the assumption that all you have is keys, while iOS was built with a touch UI in mind from the very beginning. (Now, of course, touch was later added to Symbian, but that's just not the same. Or they didn't put in the effort. Nokia even had an experimental touch phone released to the market in 2003, the 7700[1], but it was mostly ridiculous.)

- the UI was just a lot more polished, looked better, classier, and the graphics were better. They had OpenGL and probably a graphics accelerator; nothing like that in Symbian, of course. (It even took the Android guys by surprise. I remember reading/hearing in an interview that when they saw a demo or the release, they realized they had to redo the UI from scratch. Before that they had this Blackberry-ish/Symbian-ish idea; they thought they were competing with that.)

- I'm pretty sure it had a better browser.

And this pretty much defines the experience, the feel a user gets from the phone. The iPhone couldn't send or receive MMSes (some people may have used them then, but most, I guess, just wanted to have the feature), it couldn't receive 'push' email, i.e. you had to manually refresh your inbox, emails didn't just arrive. It didn't even have apps. Symbian had all these. It had had these for years by then. It even had an app-store-like thing (at least you had to send in your app for verification, and it would then be signed by Nokia or it couldn't be installed; that was a new thing around 2004-2006, something I think nobody really did before).

[1] https://www.gsmarena.com/nokia_7700-570.php

bdickason 1142 days ago [-]
My follow up question would be: Would a capacitive touch screen with 0.5s latency have made this the killer feature? Or did the capacitive touch screen enable high speed input?

I'd argue the latter, but it could be a question of framing?

atleta 1142 days ago [-]
I don't know. Again: I think the killer feature was the whole UX. Slow response times are definitely disturbing. (All my Android phones got into this state sometimes.) You just don't get that feeling with an iPhone; it couldn't possibly happen, because the UX was the center of the whole product. I'm not an Apple fan (never had an iPhone, and the early ones kept pissing me off when friends asked for help) but it's obvious that they are obsessed with UX and polishing the UI.

But you are right that the capacitive display itself makes the interaction faster, because it's enough to touch it, while a resistive one has to be pressed. So the latter is probably slower and feels like you have to put in more effort.

mercer 1139 days ago [-]
I'd say based on the number of friends I have who use old or cheap Android phones with terrible latency, it really was the touch screen and UI/UX that played the main role.

Sure, for many, iPhones are still preferred because of the low latency, so it matters, but I suspect the iPhone would've done just as well if the latency was bad. The competition was about finger vs keys/pencil, not latency.

baxtr 1142 days ago [-]
Thanks for the details. I remember the capacitive screens. They were awful! :)
atleta 1142 days ago [-]
Well, capacitive is what we have today. The old ones, that you actually had to push (and not just touch) were resistive :)
baxtr 1142 days ago [-]
I meant those!
mywittyname 1142 days ago [-]
We see this time and time again where technology needs to be introduced multiple times before it gets adopted. The killer app is always the use case.

It helps that the iPhone was an iPod with a phone attached, instead of a phone with a multi-use compute device attached.

OG iPods were single-purpose music players, and features that made sense were slowly introduced over time (and were optional). Adding support for photo viewing made sense because album art is universal and, well, album art is no different than a photo. Adding video made sense because you have this nice color screen for showing the photos/album art, and music videos are a thing people enjoy. Then adding a camera made sense, because you could already view photos/videos. Once you have all that in one package, adding phone capabilities makes a lot of sense when you realize that people are carrying around iPods along with a cellphone.

anton96 1142 days ago [-]
I think it's partly that but it is also a matter of timing.

If we can consider the iPhone to be innovative, we cannot overstate how much timing was important.

The iPhone was a phone with a big screen optimized for the internet. As for people who are not into technology, from what I can see, their main reason to go for a smartphone has been WhatsApp and free calling in general. And as time went on, more and more services of all kinds, including administrative ones, became more practical to use on the internet than in real life.

addicted 1142 days ago [-]
The iPhone “did a lot less” than competing phones but the vast majority of users with an iPhone could do a lot more with the iPhone than they could with a competing phone.
sizzle 1142 days ago [-]
I had a Sony Clié PDA which ran .swf flash games and the home screen had a grid of icons much like the iPhone 1 home screen. It was a gorgeous full screen display with touch and stylus. This predated iPhone by a few years.

Anyone else have a PDA and see the glaring opportunity to add cellular functionality to them?

brnt 1142 days ago [-]
I never saw (or see) the need to combine them. Early 2000s I figured in-ear phones (headset minus the phone) would be a thing Real Soon(tm), and then a PDA would be all I need to be productive. I still miss Palm and the apps available on it. Naturally, there is benefit to a connected PDA, but cellular? I wouldn't miss it, if my phone was just my headset.
johnwalkr 1142 days ago [-]
It was actually kind of slow at the time, especially its network connection. I agree the UX was a game changer. Big screen and responsive touchscreen made it a joy to use. For me, google maps in my pocket was the killer feature, and it worked well even without GPS.
pontifier 1143 days ago [-]
I vividly remember using a kiosk to order a sandwich at a gas station 3 years ago... Not because the sandwich was great, not because it had a great logo, or a great name...

The INSTANT I hit the button to complete the order, the built in printer almost spat the ticket at me. I ordered a second sandwich just so I could get a video of that happening again.

Edit: Just found and uploaded the video :) https://youtu.be/TX_-dXIpPvA

Edit2: looks like it was a soda, not a second sandwich.

culopatin 1143 days ago [-]
I wish I had that at work. We have self serve kiosks in the cafeteria and my muscle memory has made me faster than the display. I pretty much operate it in a constant loading-icon state now. The part I actually wait for is the 2 seconds for the printer.
grishka 1143 days ago [-]
And then over here we have a fast food chain whose kiosks are just laughably slow. Scrolling stutters, animations are laggy, and taps take what feels like a second to register. Burgers are okay tho.
layoutIfNeeded 1143 days ago [-]
Holy shit! That's like the Rolls-Royce of kiosks. I wish car infotainment systems had this level of responsiveness...
bdickason 1142 days ago [-]
This is amazing!! In a world full of 'please wait for your receipt' and dot matrix noises... this feels totally magical.

Thanks for sharing this, it's crazy how much super fast experiences still surprise us.

billti 1142 days ago [-]
Interestingly... too fast can be a problem too. A prime example is this site, Hacker News, on a really fast browser. I'll often sit there waiting for a navigation to load, until I realize it had already loaded (to a similar looking page), just so fast I didn't notice the transition.
kaba0 1142 days ago [-]
That’s why animations are important contrary to what many say here in the comments (and I agree that putting them here and there just for the sake of it is bad). They give “life” to a virtual object. For example I really like tiling window managers, but they could use a really fast animation for window switches because the changing number at the top of the screen doesn’t say as much as movement to my primate brain.
dljsjr 1143 days ago [-]
When I saw NJ in the video description I knew it was a Wawa before I even hit play. Those kiosks are their bread and butter (no pun intended).
domano 1143 days ago [-]
Wow, even the printer is fast in itself.
throwaway81523 1142 days ago [-]
That's a very normal Seiko (or similar) receipt printer. I spent a lot of time programming them. Humorously, the programming manual is marked "confidential" (I guess to make it hard for anyone to make compatible printers), but there are copies of it all over the web, and there are plenty of compatible printers ;).

The POS app that I worked on (not related to the one shown in the video) also went to pretty serious lengths to get rid of the pause between the user pressing "enter" and the receipt coming out. The store operators rightfully insisted on this, because they wanted to keep the checkout lines moving as fast as possible.

I liked those printers and remember wanting one for myself even though I had no use for it. They start at around $200 and take up space, so I managed to resist.

castlecrasher2 1143 days ago [-]
That's amazing, thanks for sharing.
rmorey 1143 days ago [-]
as a frequent patron, I read your comment and KNEW it had to be Wawa!
mritchie712 1143 days ago [-]
this is to be expected, wawa exudes excellence in all it does.
jcims 1143 days ago [-]
Probably because there is a financial incentive for speed. :)
therealdrag0 1142 days ago [-]
And yet so many are slow as mud
skinkestek 1143 days ago [-]
Wow, someone in HR at work should find that dev and get him/her to work for us :-)
2almalki 1143 days ago [-]
I would also say the Costco self-serve food kiosk ordering system is fast as well.
lotsofpulp 1142 days ago [-]
I don't know how Costco does it, but if you use tap to pay or insert your card into the chip reader while the cashier is scanning your items, at the end when the cashier presses the button to indicate they are finished scanning, it instantly says "approved" and prints a receipt.

Costco is the only place where I've seen this. I don't understand how it gets an authorization for any amount that fast, since it can't know the total while the cashier is still scanning items, and it's Costco, so it could be anywhere from $50 to $5,000, so surely it's getting the authorization after the transaction finishes? The flow is almost perfect. I have them scan my Costco membership, I use tap to pay on the card reader, then I or the second cashier move on to organizing the items into the cart, and then the cashier hands me a receipt with basically zero wasted time.

jedberg 1142 days ago [-]
Costco has an advantage over everyone else -- they already know who you are before your purchase is complete. By scanning your membership card, they already have your average purchase profile.

They actually ask the credit card processor to approve you for $avg + X%, so as long as your purchase comes in lower than that, you've already been approved. If you make a really big purchase it will take a little longer, because they go back for a second auth for the bigger amount.

It's also why you'll see some people making $700+ purchases without having to sign anything -- because Costco already knows they do that every week and pay the bill on time so they assume part of the risk.
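If it helps to picture it, here's a minimal sketch of that pre-auth idea in Python. Every name and number below is made up for illustration; the actual Costco/Citi arrangement obviously isn't public.

    # Hypothetical sketch of an average-based pre-authorization flow.
    from dataclasses import dataclass

    @dataclass
    class Member:
        avg_purchase: float  # average basket size from purchase history

    def preauth_amount(member: Member, headroom: float = 0.25) -> float:
        # Amount requested from the processor at card tap, before the total is known.
        return member.avg_purchase * (1 + headroom)

    def finalize(member: Member, total: float) -> str:
        # Instant approval if the basket fits under the pre-auth; otherwise a second, slower auth.
        if total <= preauth_amount(member):
            return "approved instantly (covered by pre-auth)"
        return "approved after a second authorization for the larger amount"

    print(finalize(Member(avg_purchase=180.0), total=165.0))
    print(finalize(Member(avg_purchase=180.0), total=950.0))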

lotsofpulp 1142 days ago [-]
Interesting, I didn’t think Citibank and Costco would have made that kind of arrangement.
jedberg 1142 days ago [-]
Citibank bent over backwards to get Costco. A few years ago Amex was the only card you could use at Costco. Citibank agreed to not only make a new card with better rewards to compete with Amex, but they agreed to honor all the reward points too. They just rolled over from Amex.

I'm sure Costco got a deal that gives them nearly at cost processing and a bunch of other stuff.

It was worth it for Citi too. The moment I got my Costco Amex replaced with a Costco Citi, the Citi became my primary card, because everyone takes Visa.

bombcar 1142 days ago [-]
Walmart sometimes gives me the "approved remove card" notification BEFORE the cashier has finished ringing up the purchase; I assume they've made a deal that lets them do that.
Solocomplex 1143 days ago [-]
They do have a very small menu though.
percentcer 1142 days ago [-]
Damn I miss Wawa
jonplackett 1143 days ago [-]
This I think is a key reason Netflix is a default ‘channel’ in my mind, whereas Apple TV, amazon prime and Disney plus are all just apps.

Netflix is faster in every way. There’s a button on my TV specifically to launch it, the videos start faster, fast forwarding is faster, there’s less buffering in general. Every single touch point is fast. And it’s because they put the effort in where the others didn’t.

Cthulhu_ 1143 days ago [-]
Definitely. I don't know if developers for e.g. TV apps get much choice in the matter, but it's like native vs webapps. The Amazon app feels like a webapp, while Netflix feels like a native app (this is on LG's WebOS).

And I know Apple is a weird one there. On the Apple TV, they offer pretty much a version of iOS. There are multiple options to build your UI, but IIRC you can build it native if you want to.

And this has been Apple's differentiator; they were FAST. The code for apps compiled down to native, as opposed to a lot of Java based phones at the time (and later with Android).

I've always maintained that Apple had a 5 year head start on Android when it comes to performance (as well as UX, even in their skeuomorphic designs), and after 5 years it was mainly Android smartphone companies focusing on more performance than the Android OS or apps becoming faster. It was Android phones that went for quadcore (and beyond) processors first, while Apple was just fine with a single core, and later, almost reluctantly, a dualcore. Simply because their earlier technology choices made their stuff so much faster and more efficient.

I'm so glad Apple didn't go ahead and make web technology the main development path, as they initially planned (or so I gathered).

jonplackett 1143 days ago [-]
Yeah it definitely feels that way. I reckon it has a lot to do with the servers too though. Even netflix.com is far superior to prime video / apple tv+ browser versions. In fact it feels virtually identical to the app version.
colde 1143 days ago [-]
For Netflix it has a lot to do with how they integrate with the TVs. They tend to integrate directly with the chipset vendor, and then ship their own SDK that the TV vendors integrate. Everyone else is relegated to terrible, shitty webapp-like development with no debugging capability. So for smart TVs at least, Netflix is on a whole different level than everyone else.
tumblewit 1143 days ago [-]
Netflix is so much superior to Prime. Prime has a hard time maintaining 1080p, while Netflix handles varying bitrates from as low as 1mbps to as high as 20mbps while watching The Crown. And the best part is how snappy the app itself is, instantly starting to play anything. Apple and Prime have a lot of work to do. Prime is possibly the worst streaming platform currently. Though their own Fire Stick app is superior in every way compared to their iOS apps or the web app.
darkteflon 1143 days ago [-]
I’m in Japan, with an English-language Amazon account, yet Prime insists on displaying Japanese subtitles on absolutely every thing I watch. Doesn’t matter what the original language of the content is, doesn’t provide an option to turn it off. Huge, bright white subtitles - much bigger than what Netflix uses. Been this way for years.

Sometimes I have fantasies about sending an email direct to Jeff Bezos just to say: dude, did you know about this?

Suffice to say, I don’t watch much Prime.

jonplackett 1143 days ago [-]
It's a lot of stuff like this that adds up. Prime and Disney+ also seem to completely forget what I'm currently watching all the time, and even if they remember, it will restart a minute or so earlier than where I left off. Netflix is always bang on the second. Netflix also always has a 'skip recap' and 'skip intro' button. These things don't happen by chance. Someone worked hard for that!
danielscrubs 1143 days ago [-]
Seems like all streaming providers have issues with global licensing. Why can’t I pick from all the languages that the provider has available? Why lock it per country Netflix?
Guillaume86 1143 days ago [-]
In Belgium we have a 60/40 language split between Dutch and French. Amazon Prime insists on promoting Dutch things to me (with a special section on the home screen), while I live in the French-speaking part. No such issue with Netflix, of course.
johnwalkr 1143 days ago [-]
They are not only displayed, they are hard coded, and there are usually 2 separate videos (dubbed and subbed) for non-Japanese content. That being said, new content is starting to have multiple audio and subtitle tracks (like Netflix).
ALittleLight 1143 days ago [-]
May be dumb to say, but have you checked to confirm you don't have subtitles enabled? If you bring up the player controls you notice a little "cc" in the lower right hand corner (at least in the English version as shown on my TV). If you click on the cc you can configure the closed captions, turning them off or on, changing language, or changing color.
formerly_proven 1143 days ago [-]
Prime has burnt in subtitles for approximately 90 % of the content outside the US. Which actually might have something to do with their garbage-tier video quality: unnecessary recodes and burnt in subtitles = multiple copies, so more incentive to use bit rates straight from the shitter.
jonplackett 1143 days ago [-]
The terrible quality and slowness I think is a remnant from them buying LoveFilm and rebranding it as Prime video all those years back, which was based on that awful Microsoft DRM that I can't remember the name of, was it Silverlight?
hef19898 1143 days ago [-]
It was silverlight. Might still be installed on my desktop.
johnwalkr 1143 days ago [-]
They are hard coded on Japan prime video (mostly).
jonplackett 1143 days ago [-]
Netflix also did a bunch of deals with ISPs and essentially have mirrors of their catalogue in all the right places, and use machine learning to guess when different shows will be watched and shuffle around what they have in those caches.

Meanwhile Apple TV+ will just go ahead and try to use a super heavy 4k stream on my iPhone over 4G - won't even let me download it (at least this was the case a year ago, the last time I was out of wifi range).

AdmiralGinge 1143 days ago [-]
There is one very niche area where Prime is quite good, and that's their VR app. I have Prime on my Oculus Quest and they've really nailed the player itself, it's like being in an actual cinema. Netflix also has a very good VR app, but none of the other streaming platforms I use have put in any effort in this regard.
pradn 1142 days ago [-]
Prime has 4k UHD for free, while Netflix doesn't. For me, that's quite an advantage.

Also, given the choice, I'll rent a movie on Amazon because they even give refunds if they detect the quality was low.

wdb 1143 days ago [-]
Personally, I prefer Disney+ over Netflix/Prime as my experience is that Disney+ comes with more subtitles. Most of the time I can get Dutch subtitles while with Netflix UK that's not the case.
domano 1143 days ago [-]
For me in germany prime always has instant, rock solid 4k.

Netflix also never stutters, but it starts with like 480p and gets better over time.

Nevertheless the Netflix UI is far superior.

patentatt 1143 days ago [-]
The HBO app on my (admittedly low end and not brand new) Roku is laughable. You have to select a program to watch, put down the remote and go grab a snack or something because it takes about a full minute to load the screen with the show details. All that to load some thumbnails. And if you dare accidentally hit a button before it’s fully loaded, it will crash the app half the time, and the whole OS about 1/10 times. Instead of replacing it with something more performant I use it as motivation to not watch TV and go exercise or read a book instead.
blowfish721 1143 days ago [-]
Agree 100%, one of the main reasons I'm resubscribing to Netflix. Just wish everyone wasn't pulling their content to their own streaming services. Have only tried Netflix, AppleTV+ and Disney+ and have to say that Disney+ is the worst of them all, with a sluggish UI and a webpage constantly crashing the Safari tab it's running in with the "using too much memory" error. And to add to it, even if you pick English as your language it still serves animated movies with the actual signs/text in them in the localized language.
jonplackett 1143 days ago [-]
You never used Disney Life, the decrepit predecessor to Disney+. Now that was a shoddy app. It used to forget your password every time it did an update (which was very often), it crashed all the time and didn't even have very much content. But I still subscribed because... kids. This is how they knew Disney+ would be a goer, they already had a lot of poor parents sending them nearly as much money for a TERRIBLE app with relatively little content.
leokennis 1143 days ago [-]
Netflix is faster, but Netflix also "always works". If I click on a button, I never get a timeout, it never does nothing.

The amount of engineering work going into that must be amazing.

jiggawatts 1142 days ago [-]
My experience with Netflix is the opposite. When you press play after pausing a show, there is an eternity during which it has a dark overlay blocking your view of what you're trying to watch.

Why do content streaming platforms assume that you want to watch everything other than the content?!

YouTube does this too on my TV, and it's infuriating. Not only does it hide half the screen for a long time after the content starts playing, it then helpfully hides most of the screen before the end of the content also!

This is the computer equivalent of someone shoving their hand in your face to block your vision.

It's rude when a human does it. It's rude when a computer does it.

pronoiac 1143 days ago [-]
This surprised me, because on my iPad it takes over 15 seconds from launching the app to choosing who's watching.
tonyedgecombe 1143 days ago [-]
Apple TV and Apple Music is particularly bad for this, I wouldn't be surprised if the apps are just showing a web view rather than native controls.
bottled_poe 1143 days ago [-]
I find this argument completely ridiculous. As if content is less important than UX. Give me a break.
jonplackett 1143 days ago [-]
Don't get me wrong, I still subscribe to all these services, because of the content.

But I'll always check Netflix for something to watch first because it's faster and easier (unless there's something specific I know i want).

Being the default first choice is very valuable, and speed is the reason they're it.

herunan 1143 days ago [-]
Content is more important than UX, but bad UX is a deterrent from enjoying the experience of finding or discovering what you want to watch.
pdimitar 1143 days ago [-]
This might be because I am a former semi-pro Quake3 player but these days I grind my teeth with 95% of all software. Everything feels like it has at least 200ms delay injected, on every transition.

I'd honestly pay extra for an iPhone where I can disable ALL motion, too. But that's the least of the problems.

I don't want to become the grumpy old grandpa yelling "back in my day!..." but we have waaaaaaaaaaay too many young JS devs who know nothing else.

We need to get back to native UIs. Is it awfully hard? Yes it is. Do the users care how it's built? No, they don't. But many people want fast UIs.

But to be fair to all sides -- there are also a lot of people who don't notice certain slowdowns that I scoff at. So there is a middle ground that won't cost billions to achieve and IMO that should be chased after.

rayiner 1143 days ago [-]
It’s awful. Everything web-based is slower on my 4.5 ghz MacBook Pro than things were on my 300 MHz PII running Windows 98. Every web page causes MacOS to complain “Safari is using a lot of power.” I was hunting for a project management web app, and one ate up 10% of the CPU just sitting there doing nothing. This has gotten particularly bad with Microsoft, with Word and Outlook on the Mac, which just kill battery life. (I think they’re using more and more JS under the hood, and I hear Outlook is slated to be replaced with a web app.) Teams is a bloated pig.

The crazy thing is that all these web apps also do a fraction of the things that the native apps used to do. They've somehow managed to strip down all the features while making the apps slow and bloated. Watching Microsoft's To-Do blog is tragicomic. Elon Musk will be living on Mars before the Microsoft tools allow you to schedule todos by dragging them to the calendar like Outlook has done since what, 98? (You can drag a todo from the web sidebar to the calendar now, but it somehow doesn't actually schedule the due or start date in the todo itself or even have any link back to the todo.) And I feel like that's one thing that's different now. I also complained that Word 97 was a slow, bloated pig compared to WordPerfect, etc. But back in the day there was feature bloat. Now, everything is both slow and non-functional.

I have to assume that it’s a structural thing with the industry. Machine learning, big data, security, etc., has become the hot areas, so all the “A” teams have migrated over there. I hear Apple is having trouble even getting people to do kernel work on MacOS.

mattgreenrocks 1143 days ago [-]
I'm convinced my retirement gig will be writing nice, native apps for my platform of choice.

They won't bring in a ton of cash, but I can continue to make beautiful apps that are fast, focused, and respect the user's time and computing resources.

dceddia 1142 days ago [-]
I just made one of these! I learned Swift to build it. Fast, focused, uses as little memory and CPU as I can manage for a (lightweight) video editor.

It's been fun to work a bit closer to the metal than I've been with JS for the last few years. Made about 50 sales so far. Can't imagine it'll make me rich but maaan it makes my video editing way faster :D

funcDropShadow 1142 days ago [-]
Your app seems great from what you have on its webpage. But the webpage made my AMD Threadripper-based tower spin up the fans like hell broke loose. Closing the tab in Firefox immediately stopped the noise.
mattgreenrocks 1142 days ago [-]
Great work on the product and marketing copy there!
dceddia 1142 days ago [-]
Thanks!
smallstepforman 1142 days ago [-]
That's why I designed a Haiku-native video editor with over 30 effects that does 4K UHD video, 3D extruded fonts, and GLSL plugins, and the package is 1.2MB in size (Medo for Haiku OS).
bdickason 1142 days ago [-]
Things is a great example here. Lightning fast, lets me quickly add or re-order todo items, and does nothing else.
waynesonfire 1143 days ago [-]
Which GUI framework will you use?
mattgreenrocks 1143 days ago [-]
If I had to pick right now, I'd choose macOS for a platform.

For tech, I'd consider both Cocoa + Swift and SwiftUI as candidates for UI components, on a case-by-case basis. Swift is not my favorite language (feels like I have to use Xcode; have yet to try out the JetBrains IDE), but it gets the results I want. Perhaps in the future, we can use Rust in a more ergonomic fashion to talk with native UIs.

Honestly, I'd love an ObjC-like language that interops with ObjC and has strong static typing with a dynamic typing escape hatch for metaprogramming.

brobdingnagians 1143 days ago [-]
The JetBrains IDE for it (AppCode) is pretty nice, but you have to use Xcode for storyboards and UI design; other than that, light years ahead of the Xcode experience.
apples_oranges 1143 days ago [-]
IDK, AppCode always seemed so resource hungry.. but yeah it's worth a try I suppose. I believe the Xcode experience isn't too bad however.
swiftcoder 1142 days ago [-]
Using a bloated non-native app to develop your elegant, fully native app. Uh huh.
Siira 1142 days ago [-]
Java is fast, unlike JS. Perhaps one day JS will be fast, too.
mattgreenrocks 1143 days ago [-]
Good to know, I'll give it a shot!
MaxBarraclough 1143 days ago [-]
My uninvited suggestion: take a look at the FOX Toolkit. A truly lightweight non-themeable GUI toolkit written in C++, for Windows and Unix/X11. It's actively updated, but it's essentially a one man operation these days.

http://fox-toolkit.org/

pindab0ter 1142 days ago [-]
The first screenshot they show you (on the screenshots page) is a Windows XP program. I can't say that inspires much confidence. Am I wrong?
MaxBarraclough 1142 days ago [-]
I can confirm it compiles with the latest Visual Studio and runs fine on Windows 10 in both 32-bit and 64-bit. (Well it did last time I checked, haven't tried the very latest release.) You're right the screenshots are ancient, but the code itself is still being updated by the project's maintainer Jeroen.

The FOX codebase isn't terribly modern, as it's older than the standard C++ concurrency machinery, but it works.

lelanthran 1142 days ago [-]
It does look dated, but I use it daily (I use the xfe file manager) and it is bloody quick - every action is almost instantaneous compared to the KDE, gnome, mate or cinnamon file managers.

It depends on the target market for your application I suppose - if your target won't be happy unless they have html/CSS or similar animations, then using something with low latency isn't going to make them happy.

MaxBarraclough 1142 days ago [-]
> It does look dated

Personally I don't mind the Windows 98 look, it strikes me as clean and no-nonsense. Everything is clear and high-contrast. Unlike with many 'flat' themes, it's generally clear what's clickable. I realise not everyone likes the Windows 98 look though.

If someone is serious about developing fast GUI apps, trading off on themeability is the kind of thing they should consider. As you say, FOX really is fast. I presume this is because of its uncompromising hard-coded native-code-only approach - it's just a C++ codebase. All the drawing operations are implemented directly in C++. Unlike Qt, there's no JavaScript. Unlike JavaFX, there's no CSS. It's all just C++.

Perhaps a GUI toolkit could add themeability without any performance impact by implementing it as a compile-time abstraction.

> depends on the target market for your application I suppose - if your target won't be happy unless they have html/CSS or similar animations, then using something with low latency isn't going to make them happy

Right, but mattgreenrocks said fast, focused, and respect the user's time and computing resources, presumably in contrast to current norms.

abakker 1143 days ago [-]
OUTLOOK! Jeez has it gotten slow on my mac. I am not a particularly fast typist, but I can routinely out-type outlook by a whole sentence. Moreover in the latest version, if I hit command-R and start typing it will routinely take so long to just start replying to a message that it will drop the first 90 characters I type. I've seen rumors that microsoft will replace it, and I cannot wait until that happens.
pklausler 1142 days ago [-]
Outlook as a native application on Windows 10 on a recent Dell laptop is so slow that I have deleted the wrong e-mail in my inbox, because I'll hit the trashcan icon and by the time Outlook notices, it's added new messages, moved things around, and then thinks that I clicked the icon on the message that now appears where the original did.
rayiner 1142 days ago [-]
This is a major problem with Outlook now. I’ve done it several times, where Outlook is thinking and moving stuff around between when I target the thing I want to hit and when I move the mouse.
twobitshifter 1143 days ago [-]
I remember Outlook on windows 10 actually adding animation to my typing to smooth out the flow of words. I disabled that immediately and I’m usually pro eye candy, but that was a step too far.
metalliqaz 1143 days ago [-]
same. it was the dumbest feature i've ever seen.
cyral 1143 days ago [-]
Glad I'm not the only one experiencing this. I have a brand new i7 Mac and outlook is laggy just switching between emails or inboxes.

Also, if I click the "Switch to New Outlook" button, it says that it can't copy over my custom IMAP accounts for work. I would think that supporting things besides exchange or gmail accounts would be something they would do before releasing a new version.

slaymaker1907 1143 days ago [-]
Weirdly enough it seems like Outlook on the web is somehow faster than the Windows version. It might be because lots of email uses HTML and desktop Outlook is stuck with an ancient HTML renderer. I am very impressed with developers who can make things consistent in Outlook as well as actual browsers.
cambalache 1142 days ago [-]
Outlook web is slow as molasses. On the desktop it is literally unusable for me (it never opens my account). Both were superior experiences in 1999.
WorldMaker 1142 days ago [-]
Outlook on the web seems to be getting most of the development effort, in part because supposedly its parts are increasingly shared with Windows Mail/Calendar (aka "Mobile Outlook") through supposedly React Native, but also in part because apparently that's just where most users use Outlook in 2021 (even in many MS365 shops, supposedly, there are a bunch of companies that prefer the web app).

There have been a bunch of interesting rumors that Microsoft is planning to hollow out the insides of Outlook Desktop (anything that isn't nailed down to big corporate contracts and their extensions), and directly replace those guts with Web Outlook via React Native or something like it.

abakker 1142 days ago [-]
I think at this point they could hollow out Outlook and replace it with a guy who draws the interface on a whiteboard and then sends me a photo of it. That might have similar round-trip latency. /s

Really, a web app wrapped in a desktop app would be fine if it could perform better. I don't even need good, just better.

silly-silly 1141 days ago [-]
It's really quite funny, as Outlook used to be the 'killer feature' for an operating system; now it just makes people want to be a killer.
deburo 1143 days ago [-]
This is a specific nickpick but you won’t make me miss Outlook desktop. It’s crazy old and big, and for basic email stuff, its web app counterpart is much faster.

But anyway in the enterprise sector, it doesn’t matter whether an app is web or native, it will be slow regardless lol.

jl6 1143 days ago [-]
And just to confirm the forces in play here: enterprises care primarily about business outcomes of software, license cost, and support risk, with end-user experience being very far down the priority list except for a very few productivity applications where UI responsiveness actually matters for increasing employee output (fewer than you’d think). In short, the users aren’t the customers.
tharne 1143 days ago [-]
Yup. That's exactly why enterprise software almost universally sucks.

This could really be applied to any good or service where the purchaser is not the end user. For example, in the U.S. dealing with your health insurance company is a nightmare, and a lot of that has to do with the fact that it's your employer who's the customer. If the health insurance company treats you badly, you can't go with another provider, so they're free to offer terrible service so long as they don't piss off your company's HR department, which decides which health plans to go with.

branko_d 1142 days ago [-]
> ...except for a very few productivity applications where UI responsiveness actually matters for increasing employee output (fewer than you’d think).

If there is a UI, UI responsiveness matters for employee output.

Research on this topic suggests that increases in UI latency non-linearly decrease user productivity, with an ultimate effect on the cost of doing business.

And that has been known for decades - take a look at "The Economic Value of Rapid Response Time" from 1982:

https://jlelliotton.blogspot.com/p/the-economic-value-of-rap...

It's puzzling to me why businesses still don't prioritize UI latency; it's not a rational decision.

Perhaps it's just human nature, as hinted in the linked article:

"...few executives are aware that such a balance is economically and technically feasible."

selimthegrim 1142 days ago [-]
Can someone explain why from the mobile version of Outlook (OWA) I can't send an email marked with Urgent/High priority/importance?
silly-silly 1141 days ago [-]
That's only for managers.
astrostl 1138 days ago [-]
nitpick. Yes, this is me nitpicking XD
pdimitar 1143 days ago [-]
I have no good theory about why that is except that maybe more and more business people are under the illusion that "software is being increasingly commoditized" which is of course not true.
yourapostasy 1141 days ago [-]
> ..."software is being increasingly commoditized"...

I only wish this were true; then value propositions for software could climb a value ladder. The challenge is that businesses are not standardized beyond some very basic functions, and new standardization comes at a brutally high cost (time and expense). So I see where office productivity has settled on Microsoft Office (though even there, I see huge fragmentation between versions, how people don't use styles, how most people have no idea of pivot tables in Excel, etc.), and we've pretty much just crawled along at a snail's pace since then.

If anything, judging by how little I can transplant of business processes that emerge around software from one company to another, and how much those processes mutate over time, I would assert software standardization is getting worse, because getting businesses to standardize even when moving to the cloud has been a bigger challenge than I anticipated.

mattgreenrocks 1143 days ago [-]
Their perception sort of perpetuates it.

I've seen devs arguing this, though IMO that is more the devs speaking out of resignation and learning to say the right things rather than the truth.

radiator 1142 days ago [-]
Is it possible to use the web only as a platform to deliver the newest version of your native application?

- User visits website
- downloads binary (preferably small size, use an appropriate language and cross-platform graphics library)
- launches it (preferably without installation)
- Perhaps creation of a local storage directory on the file system is needed the first time.
- and voilà!

What would be the main obstacles to such a workflow? Are there projects who try work like this?

rayiner 1142 days ago [-]
Zoom?
jgalentine007 1143 days ago [-]
It is awful, but there are some positive tradeoffs like security and flexibility. For example, there have been a zillion vulnerabilities with native Office over the years. Visual Studio is a terrible pain to skin or customize its look and feel compared to VS Code.
fxtentacle 1143 days ago [-]
I feel the same way. We have way too many people working on tooling who don't know how to properly make things fast.

On some days, I manage to type faster than XCode can display the letters on screen. There is no excuse for that with a 3 GHz CPU.

And yes, 200ms seems plausible to me:

Bluetooth adds delay over PS/2 (about 28ms). DisplayPort adds delay over VGA. LCD screens need to buffer internally. Most even buffer 2-3 frames for motion smoothing (= 50ms). And suddenly you have 78ms of hardware delay.

If the app you're using is Electron or the like, then the click will be buffered for 1 frame, then there's the click handler, then 1 frame of delay until the DOM is updated and another frame of delay for redraw. Maybe add 1 more frame for the Windows compositor. So that's 83ms in software-caused delay.

So I'd estimate a minimum of 161ms of latency if you use an Electron-based app with a wireless mouse on a DisplayPort-connected LCD screen, i.e. VSCode on my Mac.
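Just to restate that estimate as arithmetic (same figures as above, 60Hz frames assumed; a back-of-the-envelope sketch, not a measurement):

    # Rough latency budget using the numbers from the comment above (60 Hz display).
    FRAME_MS = 1000 / 60                       # ~16.7 ms per frame

    hardware_ms = 28 + 50                      # Bluetooth vs PS/2 (~28 ms) + 2-3 frames of LCD buffering (~50 ms)
    software_frames = 5                        # input buffer + click handler + DOM update + redraw + compositor
    software_ms = software_frames * FRAME_MS   # ~83 ms

    print(round(hardware_ms + software_ms))    # ~161 ms end to end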

PaulHoule 1143 days ago [-]
The IDE is an extreme case of user interface.

You type in a letter and that starts off a cascade of computations, incremental compilation, table lookups, and such to support syntax highlighting, completion, etc. and then it updates whatever parts of a dynamic UI (the user decides which widgets are on the screen and where) need to be updated.

It almost has to be done in a "managed language", whether that is Emacs Lisp, Java, etc., and is likely to have an extension facility that might let the user add updating operations that could use unbounded time and space. (I am wary of adding any plug-ins to Eclipse.)

I usually use a powerful Windows laptop and notice that IDE responsiveness is very much affected by the power state: if I turn down the power use because it is getting too warm for my lap, the keypress lag increases greatly.

ycombobreaker 1142 days ago [-]
If kicking off incremental compilation is causing the IDE's UI to behave sluggishly, then the IDE is wrong. The incremental compilation or other value-adds (relative to a text editor) should not create perceptible regressions.

Table lookups for syntax highlighting can't be backgrounded, but they should be trivial in comparison to stuff like compilation, intellisense, etc.

mattgreenrocks 1143 days ago [-]
I'm a bit of a language geek but I've always been confused by IDE lag, so I figure there's something I don't know.

From a UX perspective, I can see doing simple syntax highlighting on the UI thread...so long as it is something with small, bounded execution time. I don't quite get why completions and other stuff lags the UI thread, as it seems obvious that looking that information up is expensive. I can't tell if that is what's happening, or there's something more going on, such as coordinating the communication between UI/worker threads becomes costly.

I've seen it in a bunch of IDEs though, especially those in managed languages. You're typing, it goes to show a completion, and then....you wait.

wincy 1143 days ago [-]
I’m amazed at how much faster Rider seems to be than Visual Studio at its own game. Intellisense is way slower than the C# IDE made by the people who make Resharper. Resharper in visual studio is always really slow though.
deergomoo 1143 days ago [-]
> DisplayPort adds delay over VGA

Surely VGA would have more latency than DP for an LCD? It's gotta convert from digital to analogue and then back to digital again at the other end.

Is the overhead of the protocol really greater than that? (genuine question)

fxtentacle 1143 days ago [-]
I meant to compare DP+LCD vs. VGA+CRT.

But to answer your question, digital to analogue and analogue to digital conversions tend to be so fast that you don't notice. It is more of a convention thing that most VGA devices will display the image as the signal arrives, which means they have almost no latency. DP devices, on the other hand, tend to cache the image, do processing on the entire frame, and only then start the presentation.

As a result, for VGA the latency can be less than the time that it takes to send the entire picture through the wire. For DP, it always is at least one full transmission time of latency.

mrob 1143 days ago [-]
DP does not require buffering the entire frame. Data is sent as "micro packets". Each micro packet may include a maximum of 64 link symbols, and each link symbol is made up of 8 bits encoded as 8b/10b. The slowest supported link symbol clock is 1.62Gb/s, so even considering protocol overhead there are always millions of micro packets per second.

If the required video data rate is lower than the link symbol rate the micro packets are stuffed with dummy data to make up the difference, and up to four micro packets may be sent in parallel over separate lanes, so some buffering is required, but this need only add a few microseconds of latency, which is not perceptible. Of course it's possible for bad implementations to add more, but the protocol was designed to support low latency.
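Taking those figures at face value (a single lane, ignoring protocol overhead), the per-packet timing works out like this:

    # Micro packet timing from the numbers above: 64 symbols per packet,
    # 10 bits per 8b/10b symbol, slowest link rate 1.62 Gb/s per lane.
    bits_per_packet = 64 * 10
    link_rate_bps = 1.62e9

    packet_time_us = bits_per_packet / link_rate_bps * 1e6
    packets_per_sec = link_rate_bps / bits_per_packet

    print(f"{packet_time_us:.2f} microseconds per micro packet")   # ~0.40
    print(f"{packets_per_sec / 1e6:.1f} million micro packets/s")  # ~2.5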

fxtentacle 1142 days ago [-]
Thank you for teaching me something new :) I didn't know about micro-packets before.

In that case, I'm guessing the latency is coming from the fact that most LCD screens are caching one full image so that they can re-scale it in case the incoming video resolution isn't identical with the display's native resolution.

I vaguely remember there being an experimental NVIDIA feature to force scaling onto the GPU in hopes of reducing lag, but not sure that ever got released.

datagram 1142 days ago [-]
To be fair, it's only "almost no latency" if you just care about the pixels at the top of the screen. Since CRTs (and LCDs) draw the image over the course of a full frame, it's more fair to say 8.3ms, since that's when the middle of the screen will be drawn (at 60Hz a frame takes ~16.7ms, so half a frame is ~8.3ms). This is pretty comparable to modern gaming monitors, which have around 8.5-10ms of input delay @60Hz.

Where CRTs do have an advantage over LCDs is response time, which is generally a few ms even on the best monitors but basically nonexistent on CRTs.

But overall, a good monitor is only about half a frame worse than a CRT in terms of latency if you account for response time. At higher refresh rates it's even less of an issue; I'm not aware of any CRTs that can do high refresh rates at useful resolutions.

Got my numbers by glancing at a few RTINGS.com reviews: https://www.rtings.com/monitor/reviews/best/by-usage/gaming

user-the-name 1143 days ago [-]
Conversions between analog and digital happen in nanoseconds. They happen as the signal is sent.
baxuz 1142 days ago [-]
MacOS' compositor is waaay worse than Windows'. On MacOS everything feels like it's lagging for 200ms.
api 1143 days ago [-]
161ms is longer than it takes to ping half way around the world. Amazing.
fxtentacle 1143 days ago [-]
That's why most people don't notice any performance issues with Google Stadia / Geforce Now. They are conditioned to endure 100+ ms of latency for everything, so an additional 9ms of internet transmission delay from the datacenter into your house is barely noticeable.
aembleton 1143 days ago [-]
161 ms is 1/6th of a second which I would have thought would be noticeable and yet I haven't noticed it. I assume that is mouse clicks?

I'm sure I'd notice if typing had that much lag in VS Code. I am using Manjaro Linux, but I can't imagine that it would be much faster than macOS.

anonymoushn 1143 days ago [-]
Fighting-game players are generally able to block overhead attacks that take 20 frames or more (they see the attack and successfully react by going from blocking low to blocking high, after the delay caused by software, the LCD monitor, and their own input device). That's 333ms. So I think if you were really paying attention to the input delay instead of trying to write software, you would end up noticing delays around the 160ms level, idk.
Daho0n 1142 days ago [-]
333ms is ages! I can react way faster than that on a touchscreen. I bet you can too:

https://humanbenchmark.com/tests/reactiontime

anonymoushn 1142 days ago [-]
Yes. The players are trying to react to a bunch of other things, not just 1 possible move. It's in this context that 20 frames is the cutoff where moves start to be considered "fake" (i.e. getting hit is an unforced error)
aembleton 1143 days ago [-]
Just trying in VS Code again, and there does seem to be a lag for mouse clicks. Not sure if it's as much as 1/6s, but probably 1/10. Typing though looks as snappy as any terminal.

I guess Electron or MS have optimised the typing path. I don't click that much in VS Code, so I don't think it's ever bothered me.

hedgehog 1142 days ago [-]
Typing in VSCode is high latency as well, I find it viscerally unpleasant to use solely due to this. There's already a ticket: https://github.com/Microsoft/vscode/issues/27378
Hikikomori 1143 days ago [-]
And some video games with good hardware manage less than 20-30ms button-to-pixel response.
boogies 1143 days ago [-]
> Maybe add 1 more frame for the Windows compositor.

Months ago I noticed picom causing issues with keynav that I was too lazy to find a (proper, pretty-window-shadow-retaining) fix for, so I just killed it and — while I can’t confidently say I remember noticing a significant lag decrease — I can say I don’t really miss it (and my CPU, RAM, and electricity use almost certainly decreased by some small fraction).

anthk 1143 days ago [-]
Being a Go/C/Scheme coder means I'm not tied to an IDE, and it runs fast. Zero latency.
fxtentacle 1143 days ago [-]
I just used IDEs as an example. You'll have the same latency issues with WhatsApp, Signal, Slack, Deezer, for example.
higerordermap 1143 days ago [-]
Being an anti social GNU/Xorg*/SystemD/Archlinux nerd means I don't have to use any of those.

* - actually it could be Wayland but doesn't work with my old window manager config.

est 1143 days ago [-]
> Everything feels like it has at least 200ms delay injected, on every transition. I'd honestly pay extra for an iPhone

If you are using Android, you are in luck.

1. Open Settings > About Phone, Tap the build number 7 times (Or google other methods to open Developer menu for your phone model)

2. Go to Developer options -> Drawing

3. Set all animation scale to 0.5x

You'd be amazed to find how fast the phone appears

rbanffy 1143 days ago [-]
You pretty much nailed it here. It's not speed proper. It's the perception of speed. What the iPhone mastered was starting the transition right away. If you have no transition, the time to start, say, the mail app will appear long; but since the icon starts blowing up to cover the screen right after your finger press is detected (by your brain), the delay feels shorter because you see something is happening. It's merely cosmetic - the app is still starting during the animation - but, to the user, the animation is part of the process.
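A minimal sketch of that trick in web terms, assuming a hypothetical element with id "mail-icon" and a CSS class named "zooming" that defines the zoom transition (both made up for illustration): the animation is kicked off on the press itself, while the real startup work runs concurrently.

    // Simulated startup work: pretend the "app" takes ~300 ms to become ready.
    function loadApp(): Promise<HTMLElement> {
      return new Promise(resolve =>
        setTimeout(() => resolve(document.createElement("div")), 300));
    }

    function onIconPressed(icon: HTMLElement): void {
      // Start the zoom transition the instant the press is detected...
      icon.classList.add("zooming");
      // ...while the actual loading happens in parallel; the animation
      // covers (part of) the real startup time.
      loadApp().then(view => {
        document.body.appendChild(view);
        icon.classList.remove("zooming");
      });
    }

    // Hypothetical wiring: any element with id "mail-icon".
    document.getElementById("mail-icon")?.addEventListener("pointerdown", e =>
      onIconPressed(e.currentTarget as HTMLElement));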
Nullabillity 1143 days ago [-]
Err, seems like you got it the wrong way around. That was initially the reason, but these days the animation ends up taking much longer than the actual processing. GP's workaround changes the animation durations to be somewhat closer to the actual time required.

But even that's overkill for modern phones. I just tried turning off animations entirely, and things still feel pretty much instant, despite the phone being a few years old at this point.

rbanffy 1143 days ago [-]
I guess my phone doesn't have enough speed to make lack of animations feel instantaneous ;-)

In any case, the animation shouldn't take longer than it takes to start the program.

GuB-42 1143 days ago [-]
I actually went back to normal speed. Sure, fast animations, but it makes stuttering more noticeable because there isn't a slow animation to cover them up. My phone is a bit old, maybe that's worth it if you have one of the latest flagships with plenty of computing power.
nivenkos 1143 days ago [-]
You can also disable animations in the same settings, but I found it broke some applications.
tony0x02 1143 days ago [-]
TY! I used to put my phone into low battery mode sometimes just to get the speed up from disabled animations.
PaulHoule 1143 days ago [-]
I have Philips Hue and Sengled lights at home and I usually disable the "easing" animation on them to reduce the perception of time delay when I push the button... It is maybe 100 ms of perceived latency I can subtract.

It helps a lot with that "computer user bill of rights" issue where you start to worry at some point that the button press wasn't registered, and might then mash the button with unpredictable effects.

(e.g. you might get more customer satisfaction from a crosswalk button that doesn't do anything at all except 'click' instantaneously)
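For what it's worth, a hedged sketch of that tweak against the classic local Hue REST API (the state update accepts a "transitiontime" field in units of 100 ms, so 0 asks the bulb to snap instead of easing); the bridge address, API username, and light id below are placeholders:

    const BRIDGE = "192.168.1.2";          // placeholder bridge address
    const USERNAME = "your-api-username";  // placeholder API key
    const LIGHT_ID = 1;                    // placeholder light id

    // Turn the light on at full brightness with no easing animation.
    async function turnOnInstantly(): Promise<void> {
      await fetch(`http://${BRIDGE}/api/${USERNAME}/lights/${LIGHT_ID}/state`, {
        method: "PUT",
        body: JSON.stringify({ on: true, bri: 254, transitiontime: 0 }),
      });
    }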

lfowles 1143 days ago [-]
Funny, because I purposefully bought dimmer switches for bathrooms in my house that added a bit of ramp up time when turning lights on! (Makes it less jarring to turn on the bathroom lights at 2am with just that fraction of a second)
bombcar 1143 days ago [-]
How do you disable the easing?
Griffinsauce 1143 days ago [-]
This is the first thing I do when I get a new phone. How the default is as sluggish as it is is beyond me.
raminyt 1143 days ago [-]
This is just a PSA to warn people that this can fail: I just tried this in my lunch break. I have LOS 17.1 on surnia (old, I know).

These settings completely disabled my on-screen home button and other UI elements, and setting the anim scale back to 1.0 and rebooting did not fix that, no more home button for now.

I probably have to reset the phone, did not find any further info so far on how to fix it (pointers, anyone?). But the UI seemed snappy indeed at 0.5 ...

Edit: "other UI elements" including e.g. the Tab switcher in the Lightning browser. The widgets are all displayed, but totally unresponsive.

raminyt 1143 days ago [-]
Solved (?) - I booted into TWRP and rebooted again from there, and the UI elements work again. (No clue what the exact problem was.)
HenryBemis 1143 days ago [-]
IT Crowd - Have You Tried Turning It Off And On Again?

https://www.youtube.com/watch?v=nn2FB1P_Mn8

pdimitar 1143 days ago [-]
Oh yeah, I am aware of that but not using Android for 4 years now. But I think I'll buy a cheap Xiaomi device and play with Android again. Xiaomi optimize their phones quite a bit (even if you have to fight with their ROM to be less spyware).
zwirbl 1143 days ago [-]
I'd rather wait a small bit every time than get a full-blown spyphone, but scaling animation times down does improve the feel quite a lot
pdimitar 1143 days ago [-]
Fair, but it won't be my main device. Still, you have a point.
heywherelogingo 1143 days ago [-]
Is OnePlus also in this category?
Daho0n 1142 days ago [-]
No. Not perfect, not bad.
Pxtl 1143 days ago [-]
Oooh, thanks for this. I just applied the animation scale on my Pixel 4A and it feels so much peppier.
kharak 1143 days ago [-]
Thank you. Feels like a new phone. Disabled all animations.
spdionis 1143 days ago [-]
I definitely hear you. As a heavy gamer myself, and a person who likes to do things fast to avoid slowing down my train of thought, our current tools are insanely slow.

The researchers telling me I don't notice 100ms delays are smoking something. Yes, human reaction time is 200ms on average, but we process information much faster than that. Moreover, the delays make it impossible to do "learned" chains of actions because of the constant interruptions.

Hackers typing insanely fast and windows popping up everywhere in movies? The reason why that looks very unrealistic is just that our tools do not behave like that at all.

pdimitar 1143 days ago [-]
Those researchers never played Quake2 / Quake3 / Unreal Tournament.

You can absolutely detect when your ping gets above 25ms even. It can't be missed.

> Hackers typing insanely fast and windows popping up everywhere in movies? The reason why that looks very unrealistic is just that our tools do not behave like that at all.

Right on. That's why, even though I have an insanely pretty Apple display (on the iMac Pro) I move more and more of my day work to the terminal. Those movie UIs are achievable.

Related: I invest a lot of time and energy into learning my every tool's keyboard shortcuts. This increases productivity.

kmeisthax 1143 days ago [-]
I would argue that it's more noticeable in those older games where they weren't using lag compensation and you had to lead your shots in order to hit other players. If you're testing on a game which has rollback netcode then lag matters less because the game is literally hiding it from you.

What task is actually being measured here matters, too. For example, while it is true that humans cannot generally react faster than 100ms or so; most actual skills being tested by competitive gameplay are not pure reaction tests. They are usually some amount of telegraphed stimulus (notice an approaching player, an oncoming platform, etc) followed by an anticipated response. Humans are extremely sensitive to latency specifically because they need to time responses to those stimuli - not because they score really well in snap reaction tests.

Concrete example: the window to L-cancel in Melee is really small - far smaller than humanly possible to hit if this was purely a matter of reaction times. Of course, no player actually hits that window, because it's humanly impossible. They don't see their character hit the ground and then press L. They instead press L several frames in advance so that by the time their finger presses the trigger, their character has just hit the ground and made the window. Now, if I go ahead and add two frames of total lag to the display chain, all of their anticipated reactions will be too late and they'll have to retrain for that particular display.

pdimitar 1143 days ago [-]
All true. IMO the point is that people actually made effort for things to both be fast and seem fast. Unlike today.
remram 1143 days ago [-]
And input lag (eg. local, mouse-to-screen lag) gets you before that.
CraigJPerry 1143 days ago [-]
>> Moreover, the delays make it impossible to do "learned" chains of actions

Yeah this resonates for sure. Multiple times per day I tell Citrix ctrl+alt+break, down arrow, return (minimise full-screen Citrix, go to my personal desktop) and about 50% of the time an app inside the Citrix session will be delivered the down arrow, return keystrokes :-/

Pxtl 1143 days ago [-]
This. Any application that doesn't properly queue the user inputs gets my eternal hatred. Either your application needs to work at the speed of thought, or it needs to properly queue things so when it catches up it executes my commands in order.

Surprisingly, I find MS Windows native stuff to be head-and-shoulders the best at this queuing.
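A minimal sketch of that queueing idea (all names here are illustrative, not from any particular toolkit): while the app is busy, keystrokes are buffered instead of dropped, and are replayed in arrival order once it catches up.

    type KeyRecord = { key: string; timestamp: number };

    class InputQueue {
      private buffered: KeyRecord[] = [];
      private busy = false;

      constructor(private handle: (e: KeyRecord) => void) {}

      push(e: KeyRecord): void {
        if (this.busy) {
          this.buffered.push(e);   // never drop input while busy
        } else {
          this.handle(e);          // fast path: deliver immediately
        }
      }

      beginBusy(): void {
        this.busy = true;
      }

      endBusy(): void {
        this.busy = false;
        // Replay buffered keystrokes in the order they arrived.
        while (this.buffered.length > 0 && !this.busy) {
          this.handle(this.buffered.shift()!);
        }
      }
    }

The point is only the ordering guarantee; a real toolkit would also have to decide which widget the replayed input goes to once focus has moved.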

cma 1142 days ago [-]
The start menu itself seems to fail at this. And PIN entry on a locked Windows machine seems random as to whether it accepts the first keystroke as part of the PIN or not.
reassembled 1143 days ago [-]
Game developers know how to make smooth and performant UI, to say nothing of the rest of what goes into writing a game engine, particularly a fast GPU-accelerated engine. I’m starting to think it’s primarily a cultural thing, where it’s just become acceptable in the web dev and Electron app world to ship sluggish, resource-intensive apps. I also feel like more corners are cut and performance issues swept under the rug when devs are not staring down the barrel of the hardware on a daily basis.
jiggawatts 1143 days ago [-]
I used to write 4K demos and the like in assembly, and I wrote a 3D engine in the era where you still thought hard about whether to make something a function call or not because... you know... those fractions of a microsecond add up, and next thing you know you've blown your 16.6ms frame time budget!

These days I see people casually adding network hops to web applications like it's nothing. These actually take multiple milliseconds in common scenarios such as cloud hosting on a PaaS. (I measured. Have you?)

At that point it's not even relevant how fast your CPUs are, you're blowing your "time budget" in just a handful of remote function calls.

If you stop and think about it, the "modern" default protocol stack for a simple function consists of:

    - Creating an object graph scattered randomly on the heap
    - Serialising it with dynamic reflection 
      ...to a *text* format!
      ...written into a dynamically resizing buffer
    - Gzip compressing it to another resizing buffer
    - Encrypting it to stop the spies in the data centre
    - Buffering
    - Kernel transition
    - Buffering again in the NIC
    - Router(s)
    - Firewall(s)
    - Load balancer
and then the reverse of the above for the data to be received!

then the forward -- and -- backwards stack -- again -- for the response

If this isn't insanity, I don't know what is...
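A rough Node.js sketch of timing just the local, in-process part of that stack (object graph to JSON text to gzip and back); the payload shape and size are made up, and the network, TLS, and kernel hops would come on top of whatever this prints:

    import { gzipSync, gunzipSync } from "zlib";
    import { performance } from "perf_hooks";

    // A made-up object graph scattered on the heap.
    const payload = Array.from({ length: 10_000 }, (_, i) => ({
      id: i,
      name: `item-${i}`,
      tags: ["a", "b", "c"],
    }));

    const t0 = performance.now();
    const json = JSON.stringify(payload);          // serialise to a *text* format
    const zipped = gzipSync(Buffer.from(json));    // compress into another buffer
    const restored = JSON.parse(gunzipSync(zipped).toString()); // and undo it all
    const t1 = performance.now();

    console.log(`local serialise/compress round trip: ${(t1 - t0).toFixed(2)} ms`);
    console.log(`objects: ${restored.length}, gzip size: ${zipped.length} bytes`);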

imtringued 1143 days ago [-]
You're missing the point. You're talking about the fast part which in any well optimized application is never going to be slow enough to matter. The problems start when you sprinkle 0.5MB libraries all over your code base and you start doing an excessive amount of HTTP calls.

What you are doing is like a machinist complaining about a carpenter not measuring everything in thousandths of an inch or micrometers. The reality is that wood is soft and can shrink or grow. It's maybe not the best material but it's good enough for the job and it's cheap enough that you can actually afford it.

skohan 1143 days ago [-]
The problem with this analogy is that it makes sense to work with lower quality materials in real life, because the cost savings scale with the number of units you produce.

With web content it’s the exact opposite. Every time you are a bit lazy, and add another mushy, poorly optimized dependency, the cost is paid by every one of your users.

The better analogy is that the web is like an assembly line that serves content. Do you want wooden equipment with poor tolerances making up that assembly line which takes twice as long and occasionally dumps parts on the ground, or do you want a well-optimized system working at peak efficiency?

gonzo41 1143 days ago [-]
You actually want what you can afford. A shitty product in the market beats a great product on localhost.
skohan 1143 days ago [-]
A lot of the problems with web development have nothing to do with time to market. There's no technical reason you could not have a toolset which is just as easy to use, but far more performant.
lostcolony 1142 days ago [-]
So if it isn't easier to use, and less performant, why are these poor toolsets being chosen?
skohan 1142 days ago [-]
History and inertia
lostcolony 1142 days ago [-]
That would explain why they continue to be used after initial adoption. It doesn't explain why they were initially chosen if there were better options using something that already existed.

History and inertia also are nearly synonymous with "easier to use" in this context.

jonathlee 1142 days ago [-]
Because it's the new hotness.
forgotmypw17 1143 days ago [-]
You're pointing the blame at a source of EVEN WORSE performance issues, but it doesn't remove the slowdown described.

Plain HTML renders several orders of magnitude faster than post-load JS rendering, and yes, it is noticeable, especially if you account for variable connection speeds.

Most web devs develop on localhost and test on some of the best connections you can get today, leaving network performance testing as an afterthought at best... and it shows.

branko_d 1142 days ago [-]
> Plain HTML renders several orders of magnitude faster than post-load JS rendering

Well, "several orders of magnitude" is a bit much, but the point stands.

However, that's only during the initial load. After that, JS can just keep modifying the DOM based on the data retrieved from API, and never download HTML and construct new DOM again. If done properly (and that's a big if!), and where appropriate, this can be much faster.

> Most web devs develop on localhost and test on some of the best connections you can get today, leaving network performance testing as an afterthought at best... and it shows.

Very true! And on beefier CPUs/GPUs, more RAM, faster storage etc.

For the last couple of years, I've been careful to develop on "midrange" hardware, exactly so I can spot performance problems earlier.
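On the DOM-updating point above, a minimal framework-free sketch: fetch data from an API and patch an existing list in place instead of loading a whole new page. The endpoint and element id are made up for illustration.

    async function refreshItems(): Promise<void> {
      const res = await fetch("/api/items");   // hypothetical JSON endpoint
      const items: { id: number; label: string }[] = await res.json();

      const list = document.getElementById("item-list")!; // hypothetical <ul>
      list.textContent = "";                   // clear the previous entries
      for (const item of items) {
        const li = document.createElement("li");
        li.textContent = item.label;
        list.appendChild(li);                  // patch the existing DOM in place
      }
    }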

forgotmypw17 1142 days ago [-]
> However, that's only during the initial load.

Primary and by far most frequent use case.

> After that, JS can just keep modifying the DOM based on the data retrieved from API, and never download HTML and construct new DOM again.

And then you can never return to the same page again, it's gone into the ether, and the Back button doesn't work properly.

Anyone who doesn't support JS to the level you want? Well, fuck those people, let them make their own wheelchair ramps.

> If done properly (and that's a big if!), and where appropriate, this can be much faster.

A big IF, indeed.

branko_d 1141 days ago [-]
I think you have "document paradigm" in mind.

For "application paradigm" my points stand. That's where JS is appropriate. I did say "where appropriate", after all.

> Primary and by far most frequent use case.

In document paradigm.

> And then you can never return to the same page again, it's gone into the ether, and the Back button doesn't work properly.

Not if the client-side routing is done properly. I did say "if done properly".

> Anyone who doesn't support JS to the level you want?

With modern transpilers, you can produce lowest-common-denominator JS. Essentially you are treating JS as a build target / ISA.

> Well, fuck those people, let them make their own wheelchair ramps.

What's the alternative? They can download a native app, but that doesn't work for everyone either (both from the developer and the user perspective).

forgotmypw17 1134 days ago [-]
The alternative is HTML, which is accessible to most.
gspr 1143 days ago [-]
Hear, hear!

And not only is the stack you describe full of delays, several of the layers are outside of the control of the software in question and can just… fail! Sure, there are cases where I need my software to communicate with the outside world, but I get furious when some page with text on it dies because somewhere in a datacenter some NIC failed and thus the shitty webapp I was viewing fell over.

wruza 1143 days ago [-]
Developers use what is available off the shelf. If there is no easy and straightforward way to send data over the wire from client code, they will send “function onload() { unjson(await xhr(endpoint, tojson(data))) }”. Blame should go to stupid runtimes, not developers.

You were motivated by submitting a cool demo, they are motivated by not being fired after deadlines. An additional network hop is nothing compared to not shipping.

blacktriangle 1143 days ago [-]
Or there's nobody to blame and we're stuck in a very shitty local maximum. Developers want to deploy to every device on the globe instantaneously, users want to get their software without having to fight with the IT department, and while everybody was looking at the JVM as the runtime to beat the browser was picking up features like some demented katamari.

When I look at the massive backlog of requests from my users, not a single one is "speed."

jiggawatts 1142 days ago [-]
I was referring to API calls between server components of what is essentially a monolithic application.

I've recently come across several such applications that were "split up" for no good reason. Just because it's the current fad to do the microservices thing. Someone liked that fad and decided that over-architecting everything is going to keep them employed.

To clarify: This was strictly worse in every possible way. No shortcuts were taken. No time was saved. Significant time and effort was invested into making the final product much worse.

baybal2 1143 days ago [-]
Hello,

Can you tell me what your occupation is? Are you dealing with assembler-level programming regularly?

jiggawatts 1143 days ago [-]
Not any more, these days I do various kinds of systems integration work and I still dabble in development, but mostly with high-level languages like C#.

It just grinds my gears that we have all these wonderfully fast computers and we're just throwing the performance away.

My analogy to customers where I consult is this: What you're doing is like buying a dozen sticks of RAM, and then throwing ten of them into the trash. It's like pouring superglue into all but a couple of the switch ports. It's like buying a 64-core CPU and disabling 63 of those cores. It's like putting some of the servers on the Moon instead of next to each other in the same rack.

Said like that, modern development practices and infrastructure architectures suddenly sound as insane as they truly are.

josephg 1143 days ago [-]
I totally agree. I think about it like, you spend $3000 on a computer. $100 goes into actually doing your computing. The rest is thrown away by lazy programmers who can’t be bothered to learn how a profiler works. Most software is written the same way a lazy college student treats their dorm room - all available resources (surfaces) are filled before anything gets cleaned up. Getting a bigger room provides temporary relief before they just make more mess to fill the space.
zwirbl 1143 days ago [-]
Wirth's law is a reality, an awful, horribly annoying one
Pxtl 1143 days ago [-]
"can't be bothered to learn how a profiler works"

To be fair, profiling is way more difficult than it was in the days of single-core local applications. A single-threaded single-machine application means you can get a very clear and simple tree-chart of where your program's time is spent, and the places to optimize are dead obvious.

Even if you're using async/await but are basically mostly releasing the thread and awaiting the response, the end-user experience of that time is the same - they don't give a crap that you're being thoughtful to the processor if it's still 0.5s of file IO before they can do anything, but now the profiler is lying to you and saying "nope, the processor isn't spending any time in that wait, your program is fast!".

toast0 1142 days ago [-]
> To be fair, profiling is way more difficult than it was in the days of single-core local applications.

Not if you graduated from the printf school of profiling[1].

Measure the time when you start something, measure the time when you finish, and print it. Anything that takes too long gets a closer look.

[1] unaffiliated with the printf school of debugging, but coincidentally located at the same campus.
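A sketch of that school in TypeScript terms: wrap any step in a wall-clock timer and print the elapsed time. Unlike a CPU profiler, this also counts time spent awaiting file or network I/O, which is what the user actually experiences. The fetch URL in the usage comment is hypothetical.

    async function timed<T>(label: string, work: () => Promise<T>): Promise<T> {
      const start = performance.now();
      try {
        return await work();
      } finally {
        const elapsed = performance.now() - start;
        console.log(`${label}: ${elapsed.toFixed(1)} ms`);
        // Anything that prints a suspiciously large number gets a closer look.
      }
    }

    // Usage (hypothetical endpoint):
    // const config = await timed("load config", () =>
    //   fetch("/config.json").then(r => r.json()));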

baybal2 1143 days ago [-]
From MCU programmers, I know you can make even a microcontroller run circles around a Xeon if you know how to squeeze out every cycle of performance, and exploit tasks that are particularly hard to optimise.

Write a riddle for the CPU with a 100% cache miss rate, one that confuses the prefetcher, clogs the memory bus, and forces synchronous memory accesses. Such a thing is very likely to run at literally MCU speed on an x86 PC CPU.
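A rough sketch of such a workload (the effect is about memory access patterns, not the language, so TypeScript on Node is enough to see it): walk a large array sequentially, which the prefetcher loves, then follow a randomly shuffled chain of indices, where every step is a dependent, unpredictable load.

    const N = 1 << 24;                    // ~16M entries, far bigger than cache
    const next = new Int32Array(N);

    // Build a single random cycle: next[i] points to a "random" successor.
    const order = Array.from({ length: N }, (_, i) => i);
    for (let i = N - 1; i > 0; i--) {     // Fisher-Yates shuffle
      const j = Math.floor(Math.random() * (i + 1));
      [order[i], order[j]] = [order[j], order[i]];
    }
    for (let i = 0; i < N; i++) next[order[i]] = order[(i + 1) % N];

    function sequentialWalk(): number {   // prefetcher-friendly
      let sum = 0;
      for (let i = 0; i < N; i++) sum += next[i];
      return sum;
    }

    function randomChase(): number {      // every load depends on the previous one
      let sum = 0;
      let p = 0;
      for (let i = 0; i < N; i++) { p = next[p]; sum += p; }
      return sum;
    }

    for (const [label, fn] of [["sequential", sequentialWalk],
                               ["pointer chase", randomChase]] as const) {
      const t0 = performance.now();
      fn();
      console.log(`${label}: ${(performance.now() - t0).toFixed(1)} ms`);
    }

On a typical desktop the chase usually comes out several times slower than the sequential walk, despite doing the same number of additions.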

seer 1143 days ago [-]
Well yes and no: ideally you are not “throwing” that RAM away, you are paying for more flexible software that can be more easily changed in the future, or to be able to pay much less for your developers, often both.

Nobody wants slow software, it's just cheaper, in upfront and maintenance costs. Going with analogies, it's like a race car mechanic complaining that a car is using like 3 cylinders where it could have 8. Sure, but some people have other priorities I guess.

pdimitar 1143 days ago [-]
> you are paying for more flexible software that can be more easily changed in the future

In theory yes, in practice this almost never happens. 95% of the teams just quickly mash the product together and peace out before anyone notices what a mess they made. And then you have some poor Indian / African / Eastern European team trying to untangle and improve it.

Seen it literally tens of times over a course of 19 years career.

> Nobody wants slow software, it's just cheaper, in upfront and maintenance costs

That is true. But nowadays it's more like taking a loan from the bank and running away to an uninhabited island to avoid paying it off.

mynameisash 1142 days ago [-]
> In theory yes, in practice this almost never happens. 95% of the teams just quickly mash the product together and peace out before anyone notices what a mess they made.

Much of my work is in highly parallelized computing (think Spark across thousands of nodes) processing 10s or 100s of TiB at a time with declarative syntax. It's super cool. Until someone decides they're going to use this one line expression to process data because it's just so easy to write. But it turns out doing that absolutely destroys your performance because the query optimizer now has a black box in the middle of your job graph that it can't reason about.

Bad practices like that occur over and over again, and everyone just figures, "Well, we have a lot of hardware. If the job takes an extra half hour, NBD." Soon, you have scores of jobs that take eight hours to run and everyone starts to become a little uneasy because the infrastructure is starting to fail jobs on account of bad data skew and vertexes exceeding the predefined limits.

How did we get here? We severely over-optimized for engineer time to the detriment of CPU time. Certainly, there is a balance to strike, no doubt. But when writing one line of code versus six (and I'm not being hyperbolic here) becomes preferable to really understanding what your system is doing, you reap what you sow.

On the plus side, I get to come in and make things run 5x, 10x, maybe even 20x faster with very little work. It sometimes feels magical, but it would be preferable if we had some appreciation for not letting our code slowly descend into gross inefficiency.

pdimitar 1140 days ago [-]
Death by a thousand paper cuts. Classic.
seer 1143 days ago [-]
Maybe it didn’t really come across, but I am totally in the performance camp and love being able to craft a beautiful, lean and responsive UI, if nothing else than for seeing the joy on users’ faces when they are delighted (amazed!) that what they wanted done happened so fast.

But time and time again I see that projects with “fast enough” interfaces and flexible systems win out over more specialized, faster ones. And I hate that, but here we are. Sometimes we see a really performant piece of software hit the sweet spot of functionality for a while (for example Sublime Text) but then get overtaken by a fast-enough but more flexible alternative (VS Code).

anthk 1143 days ago [-]
>Eastern European

Eastern European coders are highly competent, they did magic back in the day with just a ZX Spectrum.

pdimitar 1143 days ago [-]
As an Eastern European programmer, I agree. A lot of us are called in to fix messes left by prima donna devs (who are taking home $200K a year for the privilege of making other people's lives a living nightmare).
tharne 1143 days ago [-]
To be fair, most of those "prima donna devs", as you call them, would much prefer to write well-designed programs cleanly coded, but are given completely unreasonable timeframes and staffing, then told to create an MVP and turn it over to offshore.

Very few people enjoy producing junk, but management (and customers) often demand junk today rather than quality tomorrow.

ridethebike 1142 days ago [-]
Prima donna dev here :)

>> most of those "prima donna devs", as you call them, would much prefer to write well-designed programs cleanly coded

Most of them - yes. But there's a non-negligible chunk of them who are too careless or incompetent to care about quality - they've been around long enough to gain knowledge about the project and get a Vice-President title (inflated ego included).

It is especially visible in big banks (I suppose it's typical for other big non-tech corps as well) where the tech culture is generally on the poor side.

edit: grammar

pdimitar 1143 days ago [-]
Obviously neither me nor you can generalize -- both extremes exist.

Given the chance I'd likely collect a fat paycheck and bail out at the end of the contract as those other people did. But that attitude is responsible for the increasingly awful mess that modern software is becoming.

Almost everyone is at fault, me included. The perverted incentives of today's world are only making things worse.

seer 1143 days ago [-]
Hah true dat. Been my life for the last couple of years :-D Managed to pull through a project that “failed” two times and was 2.5 years behind schedule...
skohan 1143 days ago [-]
Given the state and culture of web development, it's honestly a travesty that most software is consumed via the web currently.

I mean the web stack itself was never designed per se. HTML is essentially a text annotation format, which has been abused to support the needs of arbitrary layouts. The weakness of CSS is evident by how difficult it has been to properly center something within a container until relatively recently. And Javascript was literally designed in a week.

And then in terms of deploying web content, you have this situation where you have multiple browsers which are moving targets, so you can't even really just target raw HTML+CSS+JS if you want to deploy something - you need a tool like webpack to take care of all the compatibility issues, and translate a tool which is actually usable like React into an artifact which will behave predictably across all environments. I don't blame web developers for abusing libraries, because it's almost impossible to strip it all down and work with the raw interfaces.

The whole thing is an enormous hack. If you view your job as a programmer as writing code to drive computer hardware - which is what the true reality of programming is - then web development is so far divorced from that. I think it's a huge problem.

grishka 1143 days ago [-]
What about those weirdos who deliberately choose to use the abomination that the web stack is for desktop apps? To me it feels like they're trying to write real GUI apps in Word macros. I don't think I'll ever understand why.
Pxtl 1143 days ago [-]
The reason is there is an explosion of platforms to support. Back in the '90s, "windows desktop only" was a reasonable business plan.

Now? You need Windows desktop, mobile on 2 different operating systems, web, MacOS, and possibly TV depending on your market.

What's the lowest common denominator? Web stack.

sudosysgen 1143 days ago [-]
There's also Qt :)
radiator 1142 days ago [-]
Or Java
ryandrake 1142 days ago [-]
Or... and I know this is just crazy-talk... there is properly separating your platform-independent business logic from the minimal platform-specific UI layer. A lost art these days it seems.
wruza 1143 days ago [-]
If CorelDRAW were installed on every phone and given the same privileges, they’d use that. A new type of browser is like a social network – relatively easy to build one, insanely hard to get it adopted by everyone. The alternative is building for at least 4 different platforms, whose common denominator is usually either a non-barking dog or a vendor-locked monstrosity not even worth considering. And existing web browsers and committees are digging in their heels on the status quo.
marcosdumay 1143 days ago [-]
I've met plenty of people that prefer to write GUIs in Excel macros. If all you know about is a hammer...

I only have a problem with the ones among those hammer-only people who are proud of not knowing anything else and proclaim everybody not using a hammer for everything stupid, because "look at all those perfected hammers we created! Your choice doesn't have such nice ones".

robotnikman 1142 days ago [-]
Oh yeah, I've seen that before. Someone made a random password generator GUI in excel for people to use at one of my previous jobs
skohan 1143 days ago [-]
In some ways I can understand it, because if you want to deploy a GUI application which mostly consists of text and pictures across multiple platforms, this is probably the most viable option in a lot of cases. But the fact that this is the case is a failure of the market and the industry.
josephg 1143 days ago [-]
Yep. Native software development houses never invested enough in making a cross platform app toolkit as good as the web. There’s no technical reason why we don’t have something like electron, but lightweight and without javascript. But native-feeling cross platform UI is really hard (like $100M+ hard) and no individual company cares enough to make it happen. I’m sure it would be a great investment for the industry as a whole, but every actor with the resources is incentivised to solve their problems using different approaches. It’s pretty disappointing.
grishka 1143 days ago [-]
I don't think it's at all possible to make cross-platform GUIs that feel native. It's of course fine to share the core of your application across platforms, but you have to make the UI part separately for each platform for a truly nice result. There's no escaping that. And it's not like companies like Slack and Discord lack the resources to do so — they absolutely deliberately continue stubbornly ignoring the fact that, setting aside excessive resource usage, no one likes UIs that look and feel out of place in their OS. They totally have the resources necessary to rewrite their apps to use native UI toolkits on all supported systems.
pdimitar 1143 days ago [-]
I don't know the engineers in there, but I am willing to bet $100 that some of them really want to make native OS UIs. It's just that the business will never green-light that as a priority.
jamil7 1143 days ago [-]
Although I'm not a huge fan of it, you could argue that Flutter is trying to solve this problem in some ways and has the right backing to be able to pull it off. It unfortunately doesn't feel native though (apart from on Android).
michael1999 1143 days ago [-]
Qt and wxWidgets are still out there. But big money is flowing through the web, so web technologies spread with it.
grishka 1143 days ago [-]
Qt still feels not quite right on macOS — because it draws the controls itself instead of using the native ones. wxWidgets is the best of the bunch, because it apparently does wrap AppKit into itself, but then again, the layouts apps use give away that it's a cross-platform thing.
benhurmarcel 1143 days ago [-]
Because it works everywhere.
timw4mail 1143 days ago [-]
* As long as everywhere is a recent device that can run the latest version of an "evergreen" web browser
sjy 1143 days ago [-]
> The weakness of CSS is evident by how difficult it has been to properly center something within a container until relatively recently … you can't even really just target raw HTML+CSS+JS if you want to deploy something - you need a tool like webpack

This stuff was fixed at least 5 years ago. If you can drop support for IE11 (released in 2013 and no longer supported by Office 365), you’ll find that framework-free web development has improved massively since React was first released. And if you keep it simple and rely on what browsers support natively, you can achieve great performance.

lewispollard 1143 days ago [-]
You'd be surprised how many games up until recently used Flash (Scaleform GFx), and now in some cases HTML5 (edit: Coherent GT/Hummingbird/Gameface) content for game UI.

Rendering hundreds or thousands of meshes and doing complicated 3D math for physics is no problem, UI is still extremely hard and complex, especially if you are supporting multiple arbitrary resolutions for example.

Godot, for example, has a full UI toolkit built in (the Godot editor was made using Godot components). However to actually get it working the way you want in most cases is a horrendous struggle, a struggle with ratios, screen sizes, minimum and maximum UI control sizes, size/growth flags, and before it gets any more complicated please just throw me a Tailwind flex/grid box model instead, because HTML/CSS has solved these problems repeatedly already.

moron4hire 1143 days ago [-]
I've started noticing a weird counter effect. If you make a web app that is snappy and responsive, people just assume your app is trivial. Users have effectively been trained into thinking things like list pagination are "difficult" operations.
sidpatil 1142 days ago [-]
Maybe that's like the tech equivalent of enjoying a loud vehicle because it sounds more powerful than a quieter one. (In reality, the quieter one is more efficient than the louder one.)
arethuza 1143 days ago [-]
VS Code uses Electron and I can't say I've noticed any performance problems with it - indeed it is quite a bit faster for me than its native-code relative Visual Studio.

So responsive Electron apps are certainly possible.

robenkleene 1143 days ago [-]
I'm very interested in the general perception of VS Code being fast, because for me it's slow enough that it's the main reason I use other editors. Here are a couple of examples:

1. It takes nine times as long as Vim to open a minified JavaScript file, and then format it with Prettier: https://twitter.com/robenkleene/status/1285631026648276993

2. It takes 14 times as long to open an empty text file than BBEdit: https://twitter.com/robenkleene/status/1257724392458661889

Both of the above examples revolve around opening files for the first time, and I suspect a lot of the slowness I perceive is because I open a lot of different projects and source code files when I'm working, and this is a bad use of VS Code.

In practice, VS Code behaves more like a multi-language IDE than a text editor. Slow startup times are generally acceptable in IDEs because you're exchanging speed for power. A programmer should ideally be proficient in both an IDE and a text editor, because they're tools applicable to different problems. E.g., VS Code is a terrible choice for things like analyzing log output, formatting large files, testing isolated snippets of code, or working on source code files that aren't part of the same project. I find this to be a shame because VS Code is flexible enough that it would otherwise be excellent for all of these tasks if it were just more performant for some operations that it struggles with now.

arethuza 1143 days ago [-]
Out of interest do you mean starting a new instance of VS Code for those things or using an existing one.

I would agree that VS Code isn't the fastest thing when the editor is starting up, though I find it fine when started. I pretty much always have VS Code running so I don't find this a problem.

robenkleene 1143 days ago [-]
VS Code is already running in both examples.

A lot of the overhead seems to come from making a new window (even though the app itself is already running), although notably most of the time in the Prettier example seems to be spent syntax highlighting the JavaScript. If you want to try a direct comparison of opening a file vs. a window, you can see the difference between opening a new file in an existing window (on Mac, `⌘N` / `File > New File`) or a new window (on Mac, `⌥⌘N` / `File > New Window`). For me the latter is far slower than the former.

baybal2 1143 days ago [-]
VS Code is an anti-example here.

The whole point for them from the start was not to repeat the Atom fiasco.

The entirety of the project revolved around making WebKit not suck.

They spent enormous effort on that.

e_proxus 1143 days ago [-]
That being said, I immediately notice when switching from Sublime to VS Code. It’s something in the key presses...

I think it’s only noticeable if you’ve used a native application for a while. It’s not enough to go from VSC to Sublime and back to VSC again for five minutes. Make an effort to use a native app for a week or a month and then switch back.

disgruntledphd2 1143 days ago [-]
I noticed this a bunch when I moved from emacs to Jupyter notebook.

Emacs will sometimes become slower (especially remote emacs), but it will always buffer your keypresses and do them in the correct order.

Jupyter (for whatever reason) doesn't do this, with the result that I would end up wanting to create a new code block, but that keypress got lost and then I ended up ruining my original code block.

I 100% noticed the difference, and it was super frustrating (fortunately I left that job, and have managed to avoid Jupyter in the new gig).

pdimitar 1143 days ago [-]
I am using Spacemacs and have spent days trying to make it work faster (I am on macOS). Took a while and some effort but with a few strange tweaks I managed to make it more responsive.

Emacs/Spacemacs can still be weirdly slow sometimes but UI responsiveness is generally miles ahead of all Electron-based software still.

Which makes it even funnier. Emacs is decades old and still uses quite a few ancient techniques that are only hampering it. Even with that, it's still so much better in terms of speed! Funny.

schmorptron 1143 days ago [-]
Wait, what is the atom fiasco?
foldr 1143 days ago [-]
Atom (https://atom.io/) is another Electron-based text editor, released by GitHub (before it was acquired by Microsoft). I think it predated VSCode. It certainly had more mindshare in the early days. But whereas VSCode has always been quite snappy, Atom acquired a reputation for poor performance.
WorldMaker 1142 days ago [-]
> I think it predated VSCode

Yes, and no. They have a really interesting tale of convergent evolution.

Atom was the original Electron app (as pointed out Electron was even originally named "atom-shell"), so it predates VSCode as an Electron app. But the extremely performant "Monaco code editor" that VSCode was built on top of (that forms the heart of VSCode) was started at Microsoft years before to be a code editor in parts of the Azure Portal, and also it was the code editor in IE/Edge dev tools from as far back as IE 9 or 10 I think it was (up until the Chromium Edge). It wasn't packaged into an Electron app until after Atom, but it has an interesting heritage that predates Atom and was built for some of the same reasons that GitHub wanted to build Atom.

(ETA: Monaco's experience especially in IE Dev Tools and the wild west of minified JS dumps it had to work with from day one in that environment is where a lot of its performance came from that led VSCode to jumping Atom on performance out of the gate.)

schmorptron 1142 days ago [-]
Ah, gotcha! I only tried it out once after finding it on flathub, but never used it enough to notice it being slow. Interesting how that developed.

I'm guessing it's pretty much dead now that github is under the same company that also makes vscode, right?

WorldMaker 1142 days ago [-]
Given GitHub's Code Spaces use VSCode rather than Atom, that writing is definitely on the wall, it seems. (Arguably the feature was built for Azure and then rehomed to GitHub where it seems to fit better, but still a stronger indicator brand-wise than most of the other comparative statistics in Atom versus VSCode commit histories and GitHub/Microsoft employee contributions there to, which also seem to indicate that Atom is in maintenance mode.)
zwirbl 1143 days ago [-]
Pretty much like that. I tried Atom once (when I found platform.io and wanted to have a look) and it was just wild how slow it felt. On the upside, it made using those crappy Eclipse forks MCU manufacturers release (like CCC, Dave, etc.) feel a lot less painful
Rapzid 1143 days ago [-]
> another Electron-based text editor

Well electron used to be called "atom shell" :)

foldr 1143 days ago [-]
Ah good point. I didn't know that.
ale_jrb 1143 days ago [-]
I feel like fiasco might be overstating it a little, but basically Atom is incredibly slow and this is probably the main reason that it never overtook Sublime and friends in the way that VS Code did.
tonyedgecombe 1143 days ago [-]
VS Code has a lot of native code and VS is particularly bloated. I'm not sure this is a good comparison.
sime2009 1143 days ago [-]
VS Code has very little native code outside Electron itself.
WorldMaker 1142 days ago [-]
Depends on which languages you work with. Many language servers are written in their own languages so it is possible to work with a lot of native code when using VS Code day to day even if most of VS Code itself isn't native code.

VS Code also used to have far more native code earlier on in its development life, but seems to be transitioning a lot of it to WASM (paralleling the Node ecosystem as a whole moving a lot of performance heavy stuff from native NAPI plugins to WASM boxes; as one example: the major source maps support library moved from JS native to Rust to WASM compiled from Rust, IIRC).

api 1143 days ago [-]
Native UIs could be much, much better. They've been a neglected backwater for 20 years.

Blame OS vendors for refusing to get together to specify a cross-platform standard API for UIs. We have mostly standard APIs for networking, file I/O, even 3D graphics, but not for putting a window on the screen and putting buttons on it.

OS vendors are still trying to play the lock-in game by forcing everyone to write GUI apps for only their platform. This is a non-starter, so everyone goes to Electron.

There are a few third-party cross-platform UI libraries around. They suck. Qt is as bloated as HTML-based UIs, and then there's wxWidgets, which is ugly and has an awful API based on 1990s MFC.

We could have something better, but it's an extremely large and difficult project and nobody will fund it. OS vendors won't because they don't want cross platform (even though all developers and users do). Nobody else will because nobody pays for dev tools or building blocks. The market has been educated to believe that stuff should all be free-as-in-beer.

anthk 1143 days ago [-]
> Qt is as bloated as HTML-based UIs,

Bullshit. Qt is much faster than Electron; the Mumble client is really fast on my Turion laptop, and that's with OpenBSD.

And I say this even if I prefer Barnard IRL.

api 1143 days ago [-]
Qt is smaller than Electron, but there are far less bloated HTML5 renderers than the whole giant blob that Electron ships. Compared to those Qt is similarly sized or larger.
kaba0 1142 days ago [-]
Qt is still a program for a single purpose, so it has barely any unnecessary abstraction. Any html renderer will have plenty, because they are browsers first and foremost.
anthk 1143 days ago [-]
You don't need to use QML, and Qt 5 will be just as usable.
kitsunesoba 1142 days ago [-]
The problem with vendor-made cross-platform UI libraries is that they:

1) Would need to be lowest-common-denominator by nature

2) Would quickly stagnate due to friction against changes/additions

3) Would have few allowances for platform HIGs

If it were permissible to have vendor-specific additions on top of a common core, that could probably work fine; otherwise this hypothetical standard UI library would share many of the problems suffered by Qt, wxWidgets, etc.

The other option I could see working is something like SwiftUI, in which some control over the behavior, layout, and presentation is ceded to the platform — basically having developers provide a set of basic specifications rather than instructions for every pixel on-screen.

pdimitar 1143 days ago [-]
It's a complete stalemate. We can't force the OS vendors. The users don't like the status quo but have no choice.

As for the free aspect, I feel like this ship has sailed like 20 years ago. Nobody will pay for an UI toolkit these days. This is not Unreal Engine 4, you know. That stuff only works on AAA games market, apparently (although I am curious as to why it doesn't work everywhere else -- likely thin profit margins and/or middle management greed outside of the gaming genre).

api 1143 days ago [-]
IMHO a good cross platform UI toolkit is about as hard as a decent 3D game engine.

Crazy you say? Start making a list of the features a modern UI toolkit has to have to even be considered for serious projects.

pdimitar 1142 days ago [-]
I'm not disagreeing with you. It's just that today's mindset makes it impossible for people to pay for GUI toolkits alone, I think.
__s 1143 days ago [-]
I don't think young JS devs know nothing else. There are still good programs out there, & you only need to experience it once

I get annoyed with Windows having the cursor randomly stutter for a split second rather than smooth motion. Or Teams taking half a second to load the conversation I clicked on. Or Powershell taking 3 seconds between initial render & giving me a damn prompt. Or the delay between me pressing the Windows button & the start menu appearing. None of these delays exist on my Linux machine where I've had the freedom to select the programs I use

I've made fast UIs with Javascript & React. Like all optimization it comes down to sitting down & profiling. Not taking "this is as fast as it can be" as an answer. In short, saying "Javascript is just slow" is part of the problem

Blaming languages is chasing a fad. I deal with it when people think the service I'm working on in Ruby is going to be slow because Ruby is slow. Nope, architectures are slow. If you know what you're doing Ruby will do just fine at doing nothing, which is really the trick behind speed
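A small sketch of what "sitting down & profiling" can look like in the browser, using the standard User Timing API (performance.mark / performance.measure), which also shows up in dev-tools timelines. The renderList function and mark names are illustrative.

    function renderList(items: string[]): void {
      const ul = document.createElement("ul");
      for (const item of items) {
        const li = document.createElement("li");
        li.textContent = item;
        ul.appendChild(li);
      }
      document.body.appendChild(ul);
    }

    performance.mark("render-start");
    renderList(Array.from({ length: 5000 }, (_, i) => `row ${i}`));
    performance.mark("render-end");
    performance.measure("render", "render-start", "render-end");

    const [measure] = performance.getEntriesByName("render");
    console.log(`render took ${measure.duration.toFixed(1)} ms`);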

pdimitar 1143 days ago [-]
While what you say is fair, let me introduce an additional nuance:

Languages like JS and Ruby make it easier to write slower code (and harder to detect that you're doing it) by virtue of how their ecosystems and cultures turned out over time.

I stood behind the romantic statement of "you are holding it wrong" when I was younger but nowadays it seems to me that the languages live and die by the culture of their communities. It rarely if ever matters if the language itself can be better / faster.

So while I agree JS/Ruby might have an undeserved reputation for being slow, I think you should also agree that they are easy targets, because observably a lot of software written with them is in fact slow.

I am looking at it empirically / historically while you are postulating a theoretical construct. I don't disagree with you per se but prefer to work with the reality that's in front of me.

---

That being said, kudos for being the exception in the group of the JS devs! The web frontend industry needs much more people like yourself. Keep up the good work. <3

tamrix 1142 days ago [-]
Your Windows cursor shouldn't stutter unless you have I/O interrupt problems, bad drivers, etc.
snake_case 1143 days ago [-]
I agree that the web is generally more bloated and slow than native apps. However, native apps don't magically become performant by being native.

As an example, my grandmother-in-law has been putting up with Microsoft Jigsaw's desktop app for years. Last time I watched her load it, we sat there for a while and had to restart multiple times because it was getting stuck loading some advertisements. The startup time was absolutely brutal and the run-time performance while playing wasn't great either, even with a decent laptop.

So when I saw how slow, bloated and laggy this app was, I wanted to try to make her a better jigsaw app for the web and I think I succeeded [1]. It loads almost instantly, has no advertisements, and feels super smooth while playing... and it's mostly just js, svelte and a little bit of Rust WASM.

Anyway, I do prefer a good native app over a web app when available. But with native apps, it's also harder to block ads and other trackers compared to the web.

[1]: https://puzzlepanda.com

pdimitar 1142 days ago [-]
Sure, I'm not denying it. It's just that apparently it's very easy to produce slow-as-molasses UI with JS.

I worked with the horrors called Windows MFC and Java Swing a long time ago. It was tough, but if you did it right (moderately hard) you had a very snappy app on a computer that was 5x slower and had 10x less RAM than a midrange Android device today.

snake_case 1142 days ago [-]
You're exactly right! Building a slow web app is only one npm install away.

It takes someone who really cares about performance and monitors it to make a fast web app and to keep it that way. Unfortunately it's still too easy to accidentally make it slow.

WorldMaker 1142 days ago [-]
Microsoft probably should revoke Arkadium's right to use their brand name. Arkadium's worst-of-the-worst ads and microtransactions, not to mention their poor attention to performance detail, really are making Microsoft look bad to a lot of users who just want to play Solitaire/Minesweeper/Jigsaw sometimes.

Especially after the walkbacks that Xbox Game Studios had to do after flak about scummy microtransactions in Halo, Gears, and Forza, it still seems incredible that Microsoft continues to allow Arkadium to do it to a far bigger audience (and a lot of people's parents and grandparents especially) with their brand name attached to it.

bochoh 1142 days ago [-]
I had a chance to play the demo round and it was extremely performant - well done. The only thing I'm not sure about is that on the first click of each piece it automatically orients itself to the final orientation as expected by the puzzle. Is this an "Easy / Medium / Hard" setting? Otherwise great!
snake_case 1142 days ago [-]
Thanks for trying it out!

Yupp, it's on my todo list to give users the option on how difficult they want the rotation to be. So far I have users that want click-to-rotate and even no rotation at all.

npteljes 1143 days ago [-]
Absolutely agree, and I loathe the modern UI with a passion, for the speed alone. I recently booted up a single-core 900 MHz desktop PC with Windows XP, and it was so fast to respond that it felt like it knew what I wanted even before I pressed the button. Inspiringly smooth man-machine synergy that is rare to come by these days. I'm an old man yelling at cloud.
pdimitar 1143 days ago [-]
And then you have the Apple II computers where the only bottleneck was the diskette drive speed. Stuff was just instant with almost everything you were doing.
hypertele-Xii 1142 days ago [-]
I recently booted an old single-core PC with the latest version of Ubuntu. It ran like a glacier. Every single click took a minimum of 30 seconds to have effect.
C19is20 1143 days ago [-]
I would say that those 'that don't notice certain slowdowns', sadly, may never have experienced anything but slowed down systems.
grishka 1143 days ago [-]
I mean there's now also an entire generation that has never seen the beautiful non-commercialized internet I miss so dearly.

Meanwhile, here I am, making a decentralized social media server and being afraid to add an extra <div> lest it bloats the page.

raindropm 1139 days ago [-]
Or an entire generation that will never realize that "doing nothing" or "being bored" is a good thing, or that video games don't require multiplayer or IAP to be fun.

I consider myself lucky to have been born in the 'transitional period' (the 1980s): I saw the world of my parents and also have the ability to adapt to technology.

why_Mr_Anderson 1143 days ago [-]
A long time ago I worked at a hospital and once had to go to a certain department to fix something on a computer the nurses were using, and I was horrified by how slow the computer and everything on it was. So I asked around, and the ladies happily explained their daily morning routine:
- turn on the computer
- do a morning checkup of all patients (around 20 minutes)
- when they got back, the computer had usually finished starting Windows; if not, they waited another 10+ minutes for it to get ready
- then they started Word (another 10 minutes)
- and opened their main document with notes... or, to be exact, tried to open the document. That took another 10 minutes

TL;DR - users can get used to pretty much anything because they don't know it could be so much better

benhurmarcel 1143 days ago [-]
They also don't have a choice.

My company, like many, bloats Windows with security software. We have the type of PC where McAfee uses 80% of resources for an hour every Monday morning. PCs with spinning hard drives take a good 15-20 minutes to fully boot, and some engineers still have those. Those who complain just get told to wait a few years for their planned laptop replacement, to finally get an SSD.

There's no solution, so users just cope.

pdimitar 1143 days ago [-]
I know right? Learned helplessness.
z92 1143 days ago [-]
Anybody else remember the speedup loop?

https://thedailywtf.com/articles/The-Speedup-Loop

tl;dr: a programmer inserts a large empty loop in a UI so that, in weeks when he achieves nothing, he can remove a single zero from the end of the loop counter to speed things up a bit.
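
For illustration, a tongue-in-cheek sketch of the gag in TypeScript (purely hypothetical; the original story predates JS):

    // The "speedup loop": burn cycles on purpose, then quietly drop a zero
    // from the constant whenever a "performance improvement" is due.
    const SPEEDUP_FACTOR = 1_000_000_000; // next release: 100_000_000

    function speedupLoop(): void {
      let sink = 0;
      for (let i = 0; i < SPEEDUP_FACTOR; i++) {
        sink += i; // gives the loop a side effect so it isn't trivially dead code
      }
      if (sink < 0) console.log(sink); // never true; just keeps `sink` "used"
    }

    speedupLoop();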

rbanffy 1143 days ago [-]
I would expect the compiler to get rid of that loop.
bee_rider 1142 days ago [-]
The story is from 1990. Nowadays you would probably have to be a little bit more clever. Maybe toss in a volatile variable?
rbanffy 1142 days ago [-]
I'm pretty sure 1990s compilers would do that.
schnable 1143 days ago [-]
Reminds me of an old job writing Windows desktop software. Our flagship app was big and bloated, and it had a long load time with a nice splash screen to distract the user.

We later created a light version for a specific use case, and the product owner came prepared with a nice splash screen for this one too. The app was so lightweight that it loaded near instantaneously - so the engineer added a six second delay just to meet the splash screen requirement.
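
A sketch of how that kind of "minimum splash time" requirement tends to get implemented (TypeScript; all names here are hypothetical stand-ins, not the actual product's code):

    const MIN_SPLASH_MS = 6000;

    function delay(ms: number): Promise<void> {
      return new Promise((resolve) => setTimeout(resolve, ms));
    }

    // Stand-ins for the real UI calls.
    function showSplashScreen(): void { console.log("splash shown"); }
    function hideSplashScreen(): void { console.log("splash hidden"); }

    async function startApp(loadApp: () => Promise<void>): Promise<void> {
      showSplashScreen();
      // Even if loadApp() finishes instantly, the splash stays up for 6 seconds.
      await Promise.all([loadApp(), delay(MIN_SPLASH_MS)]);
      hideSplashScreen();
    }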

HenryBemis 1143 days ago [-]
> It looks "pretty" to the UI people.

Buys them time to get stuff done under the hood while you are gazing upon the 'sands of time' (good old Windows hourglass).

It conditions you/me/everyone to be impatient. I opt out of all such transition effects on my phone. I prefer that the screen goes black or freezes until the next screen comes up. This way I don't get distracted by irrelevant junk (spinning wheels, hourglasses, etc.). It is crunching bits. Don't add more junk to it. Let it crunch bits without tricking me.

bluejekyll 1143 days ago [-]
While I agree with you, there’s a reason the current application environments are targeting web rendering engines, it’s cheaper for development. Why develop 3-4 different applications when you can develop 1 with hardly any extra effort?

Chromium is a huge boon to developers for this reason. Now there could have been a different history here. Apple after acquiring NeXT had also gotten OpenStep, https://en.m.wikipedia.org/wiki/OpenStep . OpenStep was a cross platform UI development kit, even the web could be a target. Apple decided (possibly for good reasons, hard to argue with success) to kill this off. But, they had toyed with it, https://www.macrumors.com/2007/06/14/yellow-box-seems-to-exi... . So, Apple had effectively what Chromium has become. A cross-platform development and runtime environment.

Would things be different today if that wasn’t killed off? Would Apple have never come back from the brink of death to become the behemoth it is today, because it would have starved its own platforms? One thing you might have had is a cross-platform “native” UI platform, and that might have meant faster more efficient UIs like you want now.

Shoutout to GNUStep trying to keep the dream alive: https://en.m.wikipedia.org/wiki/GNUstep

Follow up question: maybe with Apple being so successful, now they could revive this and make it profitable for themselves, rather than starving their own platforms?

ip26 1143 days ago [-]
The good news is this means if we make browsers & JS rendering faster, everything gets faster.

The bad news is that doesn't seem likely to happen.

throwaway81523 1143 days ago [-]
I've been convinced for a while that the only sane way to develop any gui app (including web apps) is have game developers in charge. They know how to make stuff run fast, or at least interact snappily.
reader_mode 1143 days ago [-]
If you want buggy crap that's impossible to maintain filled with hacks to make something look like it works - game developers are the right choice. The requirements and practices in that industry are not comparable to standard app development and you would not want anything to do with that for app development.

People here crying about load times and FPS rendering are completely out of touch with the reality of SW development - getting stuff to function correctly and reliably with requirements constantly changing > performance, and that's hard enough with tools that simplify SW development. Optimising for performance is a luxury very few can afford.

AnIdiotOnTheNet 1143 days ago [-]
> getting stuff to function correctly and reliably

Hilariously, I wouldn't even say that modern software does that well either.

reader_mode 1143 days ago [-]
But that's my point - it's hard just getting it to work. Getting it to work fast is next level. Games are notorious for garbage-tier SW engineering practices, bugs, and ship-and-forget releases, and it's all about making something look like you'd expect vs. making it correct - just completely different goals.
hnuser123456 1143 days ago [-]
Half-life 2 is possibly one of the most impressive, in terms of combination of complexity, stability, flexibility and extensibility, pieces of software ever created. It spawned dozens of other games that all sold millions of copies and offered completely different but high-quality experiences. Sure, your typical AAA game isn't near this level of perfection, but your typical non-game software is hardly any better.
kaba0 1142 days ago [-]
I like HL2 and all, but I doubt it comes even close in complexity to a web browser, an OS kernel, a well-performing virtual runtime like the JVM, or a compiler. There are insanely complex programs out there.
throwaway7644 1142 days ago [-]
What you are describing is a false dilemma; believe it or not, it is possible to have performance, maintainability and correctness all at once.

To have performance, you have to understand the data you are working with and how it can be transformed efficiently by your hardware. To have maintainability you have to create good abstractions around how you transform your data. To have correctness you have to implement and compose those data transformations in a meaningful way.

All of those things are orthogonal.

reader_mode 1142 days ago [-]
And having all of that within the budget and time constraints that >90% of SW development faces is unrealistic - so guess what, performance is the first tradeoff. Which is why people here lamenting performance as if it were the holy grail feature are out of touch with the realities of SW development.
chromanoid 1142 days ago [-]
Budget and know-how are the limiting factors here. You can invest in all of the quality criteria. But is it sustainable business-wise?

Game developers usually and rightfully skip maintainability and invest barely enough regarding correctness. Games are like circus performances while business apps should be made to run the circus.

skohan 1143 days ago [-]
I think this is actually something which Apple has done a fairly good job of. I remember even back in 2009 in the early iPhone days, the Cocoa API's were fairly well designed in terms of letting you create responsive, non-blocking UIs on hardware an order of magnitude slower than what we have today.

Game engineers are wizards, but real general-purpose UI is a different problem than they are generally solving. A game UI is typically very limited in terms of what types of information has to be displayed and how. Many applications have to support what is essentially arbitrary 2D content which has to be laid out dynamically at runtime, and this is something different than the problems most games have to solve.

hypertele-Xii 1142 days ago [-]
> Many applications have to support what is essentially arbitrary 2D content which has to be laid out dynamically at runtime, and this is something different than the problems most games have to solve.

That sounds exactly like the problem most games have to solve. The age of fixed CPU speeds and screen resolutions is long gone. Games have to contend with a plethora of dimensions along which to represent an interactive, dynamic, multimedia world.

kaba0 1142 days ago [-]
I think OP meant it more in terms of layout: a vbox can be inside an hbox that also has a text object, and every object can change size, which causes a recalculation of everything. It is surprisingly more expensive than the GPU-accelerated rendering of many, many triangles. Games are complex, but the dimensions question is trivial there.
chromanoid 1143 days ago [-]
Yeah, totally :D https://news.ycombinator.com/item?id=26296339

Most game developers will make it as fast as they have to... in fact most developers do that.

Games are usually developed as abandonware. Do you want your apps to be developed as abandonware?

medstrom 1143 days ago [-]
I disagree: think of how long it takes to bring up the Pip-Boy in Fallout 3. Or to open a door in Mass Effect. The number of times I've had my character just running into the door for multiple seconds before it finally opens...
lfowles 1143 days ago [-]
And then they just end up using a middleware like Coherent[0] which is back to HTML+CSS!

https://coherent-labs.com/

justin66 1143 days ago [-]
> This might be because I am a former semi-pro Quake3 player but these days I grind my teeth with 95% of all software.

Not really. I'm sure plenty of people remember the quick feel of early PC UIs. Ironically, q3 kind of came at the end of that era.

Some of the same people might even remember when, with a little training, voice recognition software could do its thing without an internet connection and a warehouse full of computers at the other end, on a PC with less RAM than the framebuffer of a modern PC or phone...

veesahni 1143 days ago [-]
Totally feel your pain here. I think a lot of it has to do with current JS tech - React, by design, trades off performance for developer efficiency.

I'm sensitive to latency... The first thing I do when I set up a new Android phone is go into the developer settings and speed up all animations.

For our own company [0], we also treat speed as a top feature, though it's not something that's easy to market. It's something that power users appreciate. I even wrote a blog post [1] similar to this one. The magic number, from what I've found, is 100ms. If you can respond to a user action in 100ms, it feels instant to the user.

0: https://www.enchant.com

1: https://www.enchant.com/speed-is-a-feature
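
One common trick built on that 100ms threshold (a sketch, not necessarily what Enchant does; names are illustrative): only show a loading indicator if the work takes longer than ~100ms, so fast responses feel instant and slow ones still get feedback.

    const INSTANT_THRESHOLD_MS = 100;

    async function withDeferredSpinner<T>(
      work: Promise<T>,
      showSpinner: () => void,
      hideSpinner: () => void
    ): Promise<T> {
      // Only schedule the spinner; if `work` settles first, it never appears.
      const timer = setTimeout(showSpinner, INSTANT_THRESHOLD_MS);
      try {
        return await work;
      } finally {
        clearTimeout(timer);
        hideSpinner(); // safe even if the spinner was never shown
      }
    }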

pdimitar 1143 days ago [-]
Sounds like a good company to work in! :)

I would immediately apply but I'm not interested in Ruby or HTML/CSS anymore (although I still know the last two rather well and plan on making a return there to author my own blog theme).

My main focus is Elixir and Rust -- the latter exactly because I want to make efficient and ultra-fast software. I'm also very invested, and averagely skilled, in DevOps / sysadmin work.

I hope there are more companies like yours out there -- and that yours is thriving!

bitwize 1143 days ago [-]
One thing you can do if you're running Linux is to not run a compositing window manager. Use something old-school like fvwm, fluxbox, or WindowMaker. i3 is also good. When the X server draws directly to the display, it is FAST and there is not the delay of at least one frame, possibly several, that compositing WMs have. You run the risk of tearing, but I think most open source X video drivers let you turn tearing off.
cma 1142 days ago [-]
Better to just get a 120Hz+ monitor and lower the double-buffering delay that way. The sharper clarity while scrolling and/or tracking other motion with your eyes makes it worth it.
DennisP 1143 days ago [-]
Back in the early '90s my dad and I used to say in a few decades we'd all have supercomputers on our desks. Now by those standards we do, and everything is still freakin slow. This is not the future we were dreaming about.
bmeski 1143 days ago [-]
It's because we let the product people in.
pdimitar 1142 days ago [-]
They forced themselves in, mostly.
mrh0057 1143 days ago [-]
I’m pretty sure if you gave the devs creating slow ui on the web would create slow native apps too. I’ve created web apps that are on average faster than the desktop apps they replaced. I’m willing to bet nice simple fast programs are way cheaper to write.

The current situation is people creating abstractions at the wrong level and not understanding the performance cost of things like reflection and ORMs.

kaba0 1142 days ago [-]
I mean, ideally the use of an ORM should have no bearing on the snappiness of a UI. It should absolutely never block the UI thread. At most it could lead to longer "in progress" screens or something, but that is a different topic. (Also, when good ORMs are used correctly -- that is, the developer actually knows what he or she is doing and doesn't just blindly copy code -- I doubt they cause serious overhead. But I agree that incorrect usage can cause problems.)
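
As a minimal sketch of that point (TypeScript, names illustrative): the slow query -- ORM-backed or not -- is awaited, so the UI thread only flips a "busy" flag and stays free to handle input while the call is in flight.

    async function refreshList(
      fetchRows: () => Promise<string[]>,   // e.g. an API call backed by an ORM query
      render: (rows: string[]) => void,
      setBusy: (busy: boolean) => void
    ): Promise<void> {
      setBusy(true);                        // cheap UI update, returns immediately
      try {
        const rows = await fetchRows();     // slow I/O waits without blocking the event loop
        render(rows);
      } finally {
        setBusy(false);
      }
    }
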
pdimitar 1143 days ago [-]
Yep. Hence my somewhat dismissive quip about "way too many young JS devs who know nothing else".

Kudos to you. We need more people like you.

myth2018 1143 days ago [-]
> We need to get back to native UIs. Is it awfully hard? Yes it is. Do the users care? No, they don't. Many people want fast UIs.

I wouldn't say native UIs necessarily, IMO, but I definitely agree that something has to change.

Current systems are not only getting slower and less useful, but they're also getting harder to develop, test and maintain as well -- and, consequently, buggier.

The fact that there still are many old, TUI-based systems out there AND that users favor them over the newer ones exposes a lesson we've been insisting on overlooking.

pdimitar 1142 days ago [-]
You are correct; it doesn't matter how the improvement happens as long as it does happen.

If Electron is rewritten in C / Zig / Rust / whatever and becomes much more lightweight then I'll be on board about using it myself.

But the abusive relationship between today's software and what is essentially supercomputers has to be ended and started anew on a more respectful footing.

kaba0 1142 days ago [-]
The problem with Electron is not the implementation - after all, it is a bundled web browser, and those are really, really performant and written in C++. They pretty much make displaying an HTML document with complex CSS and running JavaScript as fast as possible (or at least close to it).

The problem is the abstraction level: instead of a locally running program manipulating objects that are turned into render instructions in a basically one-to-one fashion, there is a whole added layer of indirection - generating and parsing HTML, and then converting the dynamically created DOM into renderable elements.

Shared404 1143 days ago [-]
Add that to the fact that it's reasonably simple to make a cross-platform TUI, and I think you're on to something there. I'm ready to move forward to TUIs over the terrible GUIs we're all stuck with.
myth2018 1142 days ago [-]
Indeed, and this "back to the TUI" I advocate isn't restricted to developer tools. I actually think of such a replacement with end users in mind.

Maybe not necessarily something as radical as terminals, but anything providing the same programming ergonomics (in order to be easy to build and maintain) and constrained by the same restrictions (so that functional requirements get tamed).

At first it would definitely sound like a step backwards, but I feel somewhat confident that the market in general will accept such constraints as soon as the results become evident.

Shared404 1142 days ago [-]
I agree completely. As long as displaying faithful image/video isn't a constraint, I don't see any reason why a TUI/similar would not be acceptable for any given task, after the user gets over the "text is scary" stage.
myth2018 1142 days ago [-]
And even if your application needs to show graphics, you could easily do that on a separate, graphics-enabled pop-up window, while the forms, tables etc. would still be rendered by the TUI engine.
pdimitar 1142 days ago [-]
I'm doing that lately -- very gradually and slowly, but I'm doing it.

I've had enough of today's slow buggy messes that require gigabytes of memory and two CPU cores to show me a splash screen for 10 seconds.

A lot of the TUI apps I stumbled upon seem really well-done.

Shared404 1142 days ago [-]
Out of curiosity, do you have a list? I'm always looking for good replacements.

I'm currently using nvlc and cmus for music playback, and then of course your standard complement of text editors etc. I like Lynx et al. for some web browsing, but compatibility is a pain.

pdimitar 1142 days ago [-]
I just started like 6 months ago but...

- `lazygit` is extremely valuable.

- `lnav` for inspecting log files has turned out to be surprisingly good.

- Do you use `fzf` in tandem with your shell so you can also search your command history (and not just look for files)? I've been using that for about a year now and can't live without it.

- `mc` for TUI file management has been moderately alright.

- How about `ripgrep`? Can't believe I lived without that one too.

- Rust's tool `skim` (the command is `sk`) in tandem with `ripgrep` or `the_silver_searcher`, for very quickly searching file contents in big directories, has saved me a ton of time already (although I have since moved to searching file contents in projects in Emacs). To be fair, you can just use `fzf` instead of `sk` here; I am just positively biased towards Rust.

- `ripgrep_all` allows you to search in ZIP archives, PDF docs, Office docs/spreadsheets etc. Really useful.

- `ht` is a Rust rewrite of `httpie`, the friendlier Python alternative to `curl`. I like `ht` much more because it doesn't incur any startup overhead, and I've started replacing my scraping / syncing scripts with `ht` where applicable -- which is NOT everywhere, because `curl` is extremely powerful and it often doesn't make sense to replace it.

- Command-line or in-terminal charting/plotting: `jp`. I made a CSV file of all file sizes on my NAS (bucketed by powers of 2) and then invoked `jp` on it. Here's a sample CSV from a random directory:

    0k,79
    1k,6
    2k,1
    4k,166
    8k,34
    16k,7
    32k,6
    64k,3
    128k,27
    256k,2
    512k,2
    1M,3
    2M,4
    4M,8
    8M,10
    16M,135

Then do this:

`cat THIS_FILE.csv | jp -input csv -xy '[*][0,1]' -type bar -height 57`

And enjoy an in-terminal vertical bar chart. :)

- ...And I have a ton more.

But your question makes me sigh. I really have to start a blog. I am a very practical guy, and people usually love my posts (scattered on different forums) where I make such lists. I should roll my own static blog site generator in Rust, I suppose, because the existing ones are either slow or don't support what I need... So, not going to happen in the next year, most likely. :(

Shared404 1142 days ago [-]
I'll have to try some of those out. I've used fzf a little, but haven't really looked at it enough to get the full productivity gains. I've heard of rg of course, but ripgrep_all has flown under my radar thus far and actually sounds amazing -- I've got a decently large library of PDFs I keep losing stuff in.

The rest I haven't looked at, but will have to add to my list; they fill a couple of voids I've been feeling.

> I should roll my own blog static web site generator in Rust I suppose, because the existing ones are either slow or don't support what I need... So, not going to happen in the next year, most likely. :(

It isn't powerful enough to support what you need I'm sure, but I actually did something similar a little while ago.

http://a-shared-404.com/programs/

It's written in Rust, with dependencies on sh and markdown. I'm thinking about adding the ability to automatically execute an (optional) shell script in each directory, so that it would be easier to do things that markdown doesn't.

The code quality is atrocious (first Rust program of any size, and I'm not great at programming in the first place), but it may be useful. If you're interested in me adding that functionality, let me know, it may be the push I need to move it to the top of my pile.

pdimitar 1142 days ago [-]
The `sssss` program might have potential. But let me give you an example: I want every blog article to also have versions / revisions, and I'd like visitors to be able to read the older versions as well as the latest one.

I'd also like multilingual article support (but I think some of the engines out there can do that). The more I think of it, the more I wonder if it should be something like Ghost.org: namely, backed by SQLite and not bare files. But who knows.

Shared404 1142 days ago [-]
Interesting. I haven't done much at all with DB's, so I can't speak as to whether or not that would be more effective.

That being said, I'd be quite interested in reading your blog, whenever you are able to get it going.

ridethebike 1143 days ago [-]
I know, right? These days I just open a browser with Gmail and the YouTube home page (not even watching a video) and do nothing - 10%+ CPU utilization (i7, 8 cores). Start surfing the web and the laptop fans go into overdrive. It makes you wonder how much money and how many cores are needed to render a bunch of text and images without lagging. And that's my home PC, which is blazingly fast compared to the one in the office with internal enterprise software installed on it.
hypertele-Xii 1142 days ago [-]
What's absolutely mind-boggling to me is that moving my mouse consumes 10% of the CPU. I fought tooth and nail to keep my PS/2 ports, but everything's USB now. And apparently USB consumes 10% CPU to read mouse movements.
raindropm 1139 days ago [-]
I was curious about this, so I opened the task manager and just swung my mouse wildly. It did consume around 16% (up from 4% at idle).

WAT.

jeffbee 1143 days ago [-]
There's something wrong with your PC because nobody will be able to reproduce this result.
hnuser123456 1143 days ago [-]
Opened Chrome, opened Gmail and YouTube: nonstop 25% CPU usage, fans ramped to 4000+ RPM. Closed Chrome: 3% with Firefox and some simple tabs. The culprit seems to be Chrome's "software_reporter_tool.exe". It has chewed up 3 minutes of CPU and counting. It seems to have some well-multithreaded elements; occasionally it added 12 seconds of CPU time in 1 second.
JTbane 1142 days ago [-]
That's Chrome scanning your computer for malware like toolbars and such.
jiofih 1143 days ago [-]
iOS has very fast and snappy transitions which are well suited for touch screens. And as pointed out in the article, it’s one of the few touch devices with <30ms input latency, so it can hardly be beaten by anything else.

It doesn’t feel right with no animation (like Reduced Motion in settings) since spatial hints are lost.

nicbou 1143 days ago [-]
Websites do feel pretty slow, even when they're just a page of text. Caching helps a lot, but so does sending fewer bytes across the wire.

This can be hard to achieve if you work off templates and plugins.

Yet, I find it supremely important. I frequently lose my train of thought while waiting for pages to load.

pdimitar 1142 days ago [-]
I'm not debating whether it's hard or not. I worked with GUI toolkits some 16-18 years ago. It wasn't a walk in the park, indeed, but you had the means to produce a very snappy and responsive app.

Can the same be said about Electron-based apps?

I'm too losing my train of thought sometimes waiting for pages/apps to load. It's embarrassing and I'm face-palming.

nicbou 1142 days ago [-]
I'm strictly talking about text-based pages. It's not hard for us, skilled web developers, but it is hard for people who just want to get their content online.
fmakunbound 1143 days ago [-]
I swear I can see the lag while typing into Slack and I feel like it is definitely getting worse the longer Slack has been running. What the hell is going on there? What are we doing wrong as a species to develop software like this?? Shit should be small, fast and simple.
capableweb 1143 days ago [-]
Since we humans can, magically, make shit appear out of nothing by just writing gibberish and passing that gibberish to another program also written in gibberish, and get something that might be useful (probably not), I feel like we as a species are actually doing pretty well.

I agree overall though, most developers/managers of developers/companies who write software fucking suck at their job.

pdimitar 1143 days ago [-]
Agreed, Slack and Teams are particularly egregious examples.
XCSme 1141 days ago [-]
> too much young JS devs who know nothing else

The problem is not the language or framework; it's that very few people/devs/businesses actually care about performance anymore, so they just implement the quickest solution without even thinking about the performance impact.

XCSme 1141 days ago [-]
I just realized that even minimizing HN comment threads has like 1s+ of delay.

Profiling: https://i.snipboard.io/UPhtmH.jpg

bdickason 1142 days ago [-]
Agreed. I typically strip every feature possible out of my phone (and run in low power mode) and gravitate towards apps/products that just get out of my way.

When I built my blog, I tried to find every opportunity to reduce cruft (even stripping out extra CSS classes) so reading it would feel as close to a native app as possible.

You could argue that HN succeeded because it's focused on speed above all else.

(Also - fellow former Q1/Q3 player here, I competed in CPL, Quakecon, and a few other events).

marmaduke 1143 days ago [-]
> We need to get back to native UIs. Is it awfully hard? Yes it is

Not sure I agree with this. I wrote a bunch of data-vis GUIs with PyQt and Pyqtgraph, all in Python, with keyboard shortcuts and accelerators for everything, and it had Vim-like speed except where it was CPU-bound by data processing (NumPy).

So I think it can be fairly easy, yet Qt dies (frequently, on HN) on the altar of native look/feel/platform (i.e. it doesn't look/feel like a macOS app on macOS).

pdimitar 1143 days ago [-]
Not sure what you mean -- but I've never used Qt. I gather it's a controversial topic because I've heard exactly the opposite feedback than yours about it.

Still, I bet if more people used it then its community would have an incentive -- or a kick in the butt -- to quickly fix its deficiencies and it could become the de facto native GUI toolkit? Who knows.

kaba0 1142 days ago [-]
Qt is really quick and is used in plenty of places. Places where (critical) software is needed for internal use -- monitoring and the like -- will mostly build it as native apps, and those places often use Qt. I suggest you try out the Telegram Desktop app (I think the Mac has a non-Qt version as well, so be aware). I really like using it for its speed as well.

My only gripe with Qt is that one has to use C++ or Python; the other bindings are not as well supported.

pdimitar 1140 days ago [-]
Oh, I am using the Telegram Lite desktop app. It's a breath of fresh air in the pile of slow Electron-based UIs. I absolutely love it and learned its keyboard shortcuts.
the_gipsy 1143 days ago [-]
That's not fair to "young JS devs" who get inserted into a culture of product slowness.
pdimitar 1143 days ago [-]
I've been fired for taking the long road and preferring good craftsmanship and not idolizing shipping several times a day.

Sadly most people can't afford that and the results are visible everywhere in IT.

slaymaker1907 1143 days ago [-]
Ok then, everyone will just need to pay 3x as much for software. C and C++ will never return as mainstream UI languages for applications without extreme performance considerations because the cost of developing in such languages is too high. Before anyone gets their hopes up, I've written quite a bit of Rust and don't believe it changes this. Rust's type system is very difficult to teach and learn even compared to other complex type systems. Difficult to teach/learn = $$$. Even after writing a lot of Rust, I'm also still not very fast at writing it compared to my speed in other languages.

The only change we might see is more "native" UIs written in C#, Swift, etc. Also, Swift will not be a suitable replacement in its current form. Any replacement needs to at minimum work on macOS plus Windows, and by work I mean you can create a UI without crazy amounts of platform-specific code.

pdimitar 1143 days ago [-]
There are ways to go still, I agree.

But I'd argue that's because nobody wants to invest money and effort.

As a fan of Rust (I'm regularly using it but I don't work for money with it currently) you are right: even if everyone agreed to move to it tonight, that wouldn't change things much because we have no cross-platform native UI toolkit.

Additionally, you might be surprised what prices people would pay for really good software. I personally would pay $500 for a lifetime license of a lightweight, fast, cross-platform and rock-solid office suite. But there's no such thing.

avereveard 1143 days ago [-]
> "back in my day!..." but we have waaaaaaaaaaay too much young JS devs who know nothing else.

Funny -- Java applets in the '90s gained a reputation for being slow, caused mostly by junior devs putting work on the UI thread.

pdimitar 1143 days ago [-]
I remember those times. To be fair though, there wasn't much of anything else back then...
mcguire 1142 days ago [-]
And cue the traditional response: "Developer time is much more expensive than computer time, so it doesn't make sense to spend any effort optimizing."

:-(

pdimitar 1142 days ago [-]
That might be true in isolation but repeat it enough times and it's actually much cheaper to have several highly paid programmers work on it for several years, compared to so much time and energy lost otherwise...
hddu 1143 days ago [-]
You can disable all motion on Android. One of the first things I do is disable all the animations. Everything responds instantly. It's great.
kmeisthax 1143 days ago [-]
On iOS you can disable animations - it's buried in the Accessibility section but it totally works.
kaba0 1142 days ago [-]
I don’t see the hate for animations when done correctly. I just tried it, and I really prefer the normal mode of operation. On Android I did disable animations because on an older version they are not done as smoothly as on the iPhone, but they can absolutely give many information on what actually happens.
_trampeltier 1143 days ago [-]
I guess that's also the reason why old games in emulators don't feel the same.
Daho0n 1142 days ago [-]
>I'd honestly pay extra for an iPhone where I can disable ALL motion, too.

I do that on my Android phone. Feels snappier than the iPhone now. Not perfect though. Scrolling sucks.

JavaScript is bad, but nowadays that's not the main evil. That falls on the hideous, awful libraries on top of JS that everyone seems to love these days. They need to die ASAP.

habosa 1143 days ago [-]
Speed is still the differentiator on iPhones. After 10 years of Android I switched to iOS and it's like someone greased up the whole experience. I didn't realize how much waiting / stuttering I was taking for granted on Android.

I can never go back to Android now. I'm sure if you studied the phones under a high speed camera we'd be talking about differences of only tens of ms but when you tap something 1000x a day it really adds up. It's just like how most programmers are hyper sensitive to text editor latency.

sullyj3 1143 days ago [-]
Are you sure the perceived difference isn't from switching from a phone that you've been using for a while to a new one? In my experience, both android and iphones feel blazing fast when you pull them out of the box, and sluggish later on when you've been using them for a year or two and they're loaded up with all your junk.
akrain 1143 days ago [-]
As a counterpoint, I recently switched from an old iPhone SE to a Pixel 4a and the experience could not have been any better. I find the vanilla Android to be as snappy as iOS for daily tasks if not better. The problem lies with custom Android ROMs on low spec phones which do hamper the user experience quite a bit
coldtea 1143 days ago [-]
>I find the vanilla Android to be as snappy as iOS for daily tasks if not better.

"As snappy if not better" as an "old iPhone SE" though...

411111111111111 1143 days ago [-]
The issue with android phones is knowing which are optimized for speed.

A Samsung Galaxy, for example, costs almost as much as an equivalent iPhone and is significantly less responsive than some Motorola phones with stock Android.

dividedbyzero 1143 days ago [-]
I switched from an iPhone 7 to an iPhone 11 Pro last year. While the iPhone 7 never struggled as badly as the Androids I had before it (where keypresses might eventually take seconds to register), it did struggle somewhat to keep up with me at times. The 11 Pro felt (and still feels) like a fast desktop computer, in that the UI and everyday operations never appear to tax it noticeably at all; the UX in that regard is super smooth. I'd expect the SE to fare a bit worse than the 7, since it had an A9 vs. an A10 SoC.
akrain 1143 days ago [-]
To be honest, it is much better overall. Didn't want to start a flame war here
dagw 1143 days ago [-]
I don't know. I have a Pixel 3a as my personal phone and an iPhone 8 as my work phone and have been using them side by side for years. I honestly cannot say that one is 'faster' or has lower latency than the other.
floatboth 1143 days ago [-]
Good Android phones these days feel great especially because of >60Hz displays. Software for the most part keeps up really well with high refresh rate. The only laggy thing I know of is the Play Store sidebar closing when tapping "my apps & games".
neogodless 1143 days ago [-]
Using a OnePlus 7 Pro, and I tried this, and the sidebar closed instantly. How many apps and games do you have...? :)

(But yeah, this phone is super fast, and the 90Hz screen is a joy. I literally cannot switch to iPhone until/unless they get faster screens, because of post-concussion syndrome and the migraines I get from 60Hz screens.)

CinematicStudio 1143 days ago [-]
I feel you! I'm on Android, and can't believe the morons at Google still don't get this! It's insanely slow all the time - I mean, I do have a quad-core with 3GB of RAM; it's not top of the line, but still, with this hardware everything should be INSTANT.
perryizgr8 1143 days ago [-]
Which android phone were you using and which iphone did you switch to? Asking because I'm very skeptical that iphone and android have any significant difference in daily usage speeds.
Guillaume86 1143 days ago [-]
Fair question, here's some 2019 benchs that found similar latencies between high end Samsung and Apple devices: https://blog.gamebench.net/touch-latency-benchmarks-iphone-x... so it seems some Android manufacturers did manage to catch up finally.
habosa 1141 days ago [-]
I owned many Android phones over the years, beginning with the OG Droid. Had some Samsungs along the way, my last few were Pixels. My final Android phone was a Pixel 3, I switched to an iPhone SE 2020.

Maybe the total time to complete a given task is about the same, but I can seriously say I have never seen the iPhone drop a single frame. The Pixel got choppy all the time. Every Android phone owner knows the feeling of "why is my phone suddenly hot? Oh some runaway background service" or "why is this thing running at 5fps? Oh an app is auto-updating in the background".

iPhone just doesn't have these issues.

piperswe 1143 days ago [-]
Personally I experienced a similar thing switching from a Galaxy S10+ to an iPhone XS.
perryizgr8 1143 days ago [-]
Hmm, fair point since those are similar gen and both flagships.

Personally I switched from an S10 to an iPhone 11, and was absolutely repulsed by the horrible screen on the iPhone. They both felt similar in terms of UI responsiveness. But due to the screen I went back to the S10.

curist 1143 days ago [-]
perryizgr8 1143 days ago [-]
Exactly -- modern Android flagships are on par with or better than the iPhone when it comes to touch latency. Most people who express the sentiment "OMG the iPhone is so smooth" went from a $200 Moto G to a $1000 iPhone X. Compare within the same class, and you will find that both OSes are comparable.
dntrkv 1143 days ago [-]
Not sure how you got that from the link. The iPhone devices have 20%+ lower latency.
perryizgr8 1142 days ago [-]
That link is old. There are similar measurements for modern devices available on many review websites. This link does identify the correct metric that people seem to respond to when they feel a phone is "faster".
DarwinMailApp 1143 days ago [-]
I can certainly attest to this.

Every second support email in the early days of https://www.darwinmail.app was from users who were wondering why the website wasn't faster to load and operate.

I knew that this was going to slowly kill the product if I didn't focus on optimising the speed immediately. I also heard somewhere that even a 0.01s increase in load times for Amazon's website would cost them somewhere in the region of hundreds of millions.

1. I gathered feedback from all users that said the website was slow (in any way and in any page/component/workflow).

2. I created a Trello board https://trello.com/c/PPuhLtW0/95-upgrade-performance for all the feedback.

3. Since that week of initial performance enhancement research and groundwork, I have essentially been completing todo's on that Trello card and adding more tasks as time goes on. I think the more speed improvements I make, the more I learn about what other parts of the application can be sped up. It's like economics, the more you learn, the more you realise you have so much more to learn :D

A few years later, I haven't received an email asking for the app to be faster in several months, although I continue to make speed improvements on a regular basis.

Netflix has been my source of inspiration here. They are leagues ahead of every other streaming service, and their custom architecture placed at the ISP level is absolutely incredible and central to how they deliver content at such amazing speeds.

pronoiac 1143 days ago [-]
Hey, you could copy your description from Product Hunt - "Enhance Gmail to get your Google Inbox features back" - and put it on your front page and/or your About page.
mattmanser 1143 days ago [-]
You never heard of a profiler? Logging? You're going about this all wrong.

When fixing performance problems you shouldn't guess, just profile it to find the bottlenecks.

I've seen plenty of performance 'fixes' that weren't, pure guesses by developers that did nothing, when a quick profile immediately revealed the culprit.

In your case you also need to figure out if it's happening server-side or client-side. I generally start with the server-side logs, get a few days' or weeks' worth of data, find average page request times plus how much deviation there is on those requests, then go from there. That covers the server side. The client side, unfortunately, is a lot harder. Google Analytics page load speed, for example, is a pile of crap. But, again, there's a profiler in dev tools; remember that JS compile time is significant and can slow load time too, so check that out as well as the actual run times (JS compile time shows up in the page load graph).
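
For the client side, the browser's own Navigation Timing API gives a rough breakdown without any extra tooling. A sketch (TypeScript; the buckets are chosen for illustration):

    const [nav] = performance.getEntriesByType(
      "navigation"
    ) as PerformanceNavigationTiming[];

    if (nav) {
      console.table({
        "DNS + connect (ms)": nav.connectEnd - nav.domainLookupStart,
        "Request + response (ms)": nav.responseEnd - nav.requestStart,
        "DOM processing (ms)": nav.domContentLoadedEventStart - nav.responseEnd,
        "Full load (ms)": nav.loadEventEnd - nav.startTime,
      });
    }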

toss1 1143 days ago [-]
Good points about using good tools & analysis techniques - especially to get to latent sources of slowness.

But I'm not sure you can say that he's not profiling - he's using the end users' direct experience as the profiling tool, prioritizing the fixes by greatest annoyance.

Since he let himself get in the mode of being reactive, that's not a bad way out of the hole he dug himself.

Of course the best way is to design your architecture for speed, minimize all code use & data transfer, use the profiling tools before release-candidate status, and prioritize speed & performance in the QA process.

DarwinMailApp 1143 days ago [-]
More profiling is on the Trello board list ;)

I've done heaps of profiling.. pun intended :D

flavius29663 1143 days ago [-]
Profiling is not a replacement for user feedback. You could profile some server-side functions and see that X takes 5 ms and Y takes 8 ms. You'll think all is good -- they're pretty fast. But a user might complain about a feature being slow, say deleting a thread, which happens to call the 5 ms function 50 times for some reason. You would then address the reason for so many calls rather than optimizing the call itself.

Talking to your users is paramount; at the very least they will indicate where you have to add profiling.
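
A sketch of measuring the user-visible operation end to end, rather than the individual calls (TypeScript; the "delete thread" names just echo the example above):

    async function deleteThread(
      ids: string[],
      deleteOne: (id: string) => Promise<void>  // the "fast" 5 ms call
    ): Promise<void> {
      performance.mark("delete-thread-start");
      for (const id of ids) {
        await deleteOne(id);  // issued N times in series: 50 x 5 ms = 250 ms for the user
      }
      performance.mark("delete-thread-end");
      performance.measure("delete-thread", "delete-thread-start", "delete-thread-end");
      const m = performance.getEntriesByName("delete-thread").pop();
      if (m) {
        console.log(`delete-thread: ${m.duration.toFixed(0)}ms for ${ids.length} items`);
      }
    }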

GuB-42 1143 days ago [-]
The thing is, the iPhone isn't that fast, but it is able to react quickly to your input by showing you a nice, smooth but slow animation while work is being done in the background. As a result it feels faster.

That's something no other smartphone could do. I don't know how things are today, but it looks like Android more or less "solved" the problem by throwing powerful hardware at it.

The killer feature is not really speed, but low input latency. And this is achieved by taking performance into consideration during development. And contrary to the old "premature optimization is the root of all evil" saying, you have to do it early, because while it can be relatively easy to increase throughput, latency is much harder to deal with.

This is also part of the success of Google Chrome. While it didn't load pages that much faster than its competition it was great at showing you something quickly. It took ages for Firefox to catch up, and it looks like it did mostly because Chrome became slower over time. How is Servo going BTW?

p_l 1143 days ago [-]
Funnily enough, the first few iPhones were ridiculously underpowered, and apparently a lot of tricks were thrown at hiding that (things you learn from salty platform developers XD).
neogodless 1143 days ago [-]
I think I agree with a bunch of stuff you posted, but I can't get past this comment.

> That's something no other smartphone could do.

Either I'm misreading you, or you have a strangely narrow view of the world we live in. What is so magical about the iPhone that no other smartphone can "react quickly to your input by showing you a nice, smooth but slow animation while work is being done in the background"?

(Part of my doubt probably comes from using a OnePlus 7 Pro as my daily driver. 90Hz refresh rate and everything is ridiculously fast and smooth. But that's not actually possible, is it?)

GuB-42 1143 days ago [-]
There is nothing magical about the iPhone especially not on the hardware side.

I don't know how, but if you look at input response time charts, especially in the early days, the iPhone is among the best, if not the best by a large margin. Fewer abstraction layers? A better-tuned OS? More attention given to latency? More trickery? Apple's level of integration and closed ecosystem certainly helps here, and I can easily imagine Steve Jobs pissing off every single employee that wasn't fired for the smallest hiccup. I am far from an Apple fan and I don't own any of their stuff, but I have to admit that on some points they are really good. And as a developer, I have a lot of respect for those who care about performance.

Your OnePlus 7 Pro is a beast. It is fast and smooth because it has quasi-desktop class hardware inside. That's the "solution" I was referring to.

To be fair, Android did work on smoothness. It was called "project butter". But IMHO, they still didn't manage to match Apple on equivalent hardware. I don't know about the situation right now but I hope everything is smooth considering the ridiculously powerful hardware they put in modern flagships.

conscion 1143 days ago [-]
The iPhone has used a 120hz touch digitizer for a long time, while Android phones have usually used only a 60hz touch digitizer. So while the screen refresh rate was only 60hz, they could start reacting and creating the animation sooner.
auggierose 1143 days ago [-]
The iPhone is really REALLY fast. Especially if you program it in Swift and Metal, instead of Javascript.
kall 1142 days ago [-]
Thankfully JSC is also really REALLY fast, but interacting with websites is slow.
auggierose 1142 days ago [-]
No, sorry, JSC cannot ever be really REALLY fast.
marcosdumay 1143 days ago [-]
> you have to do it early, because while can be relatively easy to increase throughput, latency is much harder to deal with.

Hum... I don't think that makes much sense. Yes, there are some latency optimizations that are certain and architecture wide, so they are much easier to do at first write time, but there are a lot of latency optimizations that are iffy and local, and thus much easier to do with an actual profiler running.

The thing is, throughput optimizations also come in both forms. I'm having a very hard time remembering any large and general enough experience on the ratios, or arriving at a property that would change them for latency or throughput. I think that dimension is really not relevant for them.

lbriner 1143 days ago [-]
I guess the OP should have said that the "perception" of speed is the killer feature.

I find Android is now terrible on both my Samsung Tab 2 and my Galaxy S8. Sometimes I click something and it takes over a second to show any UI change, and it looks like it hasn't responded. Just as you go to click it again, it comes up. I find the same in multiple apps where basic actions take too long, even in simple menu/view apps like email.

I don't know what has happened but it does seem crazy that in 20 years with hardware that is 1000s of times more powerful, we still can't consistently solve click latency.

Maybe it's just me.

grishka 1143 days ago [-]
Android has always allowed the exact same trick iOS does to make it seem that apps launch quickly — show an "outline" of the UI while the real one loads. Though it does take some drawable and theming wizardry to get it right. Some apps, on both platforms, use this to show a splash screen.
user-the-name 1143 days ago [-]
The iPhone is by a good margin the fastest phone on the market, on pretty much any kind of benchmark.

Animations are very seldom used on iOS to hide any work happening in the background. Most things happen instantly, and animations are added for usability, to give spatial hints and make the UI easier to follow.

leadingthenet 1143 days ago [-]
> How is Servo going BTW?

Pretty much dead, unfortunately.

riho 1143 days ago [-]
This is a big reason why I get frustrated with comments about high refresh rate monitors being mostly for gaming, or it not being that important for productivity applications.

There's a reason why it's hard to ever go back, once you've experienced the fluidity of even just your mouse cursor reacting instantly to your movements.

If you've ever used the iPad Pro, there's clearly something special about the experience. It just _feels_ better, and for all the same reasons described in the article.

60hz is far from smooth, and that number is a leftover from days past, not what is actually optimal or good.

Display technologies unfortunately still have ways to go when it comes to high resolution, color accurate panels, with high refresh rates, but the general direction on the market is that high refresh rates are not available in the "productivity" category of monitors, even if sometimes the manufacturer has panels that would fit the bill. You unfortunately always need to look in the gaming category, which usually lack many of the features you'd like in a more productivity centered display. Such as a fully adjustable stand, high color accuracy and viewing angles, virtual display splitting, or just overall design of the enclosure.

I could go on another rant about display enclosure designs... Why isn't there a single company out there (with perhaps the exception of Dell) that's creating nice and minimal display enclosures that aren't covered in cheap plastic and "aesthetic" ornaments? Apple's Cinema Display from 2004 is to this day one of the better looking enclosures out there.

I don't think you can blame this on the consumers really. For the higher end market that I'm talking about in general here, I'd be willing to take a bet on if you build it they will come. I'd certainly be praising any company willing to take this on to high heavens.

I want a great, fast, accurate panel with a nice, minimal, aluminum enclosure. Is that just too much to ask?

TacticalCoder 1143 days ago [-]
> ... of even just your mouse cursor reacting instantly to your movements.

But you probably haven't, except in, well, games?

> 60hz is far from smooth, and that number is a leftover from days past, not what is actually optimal or good.

Well, it would already be wonderful if we actually had 60 Hz in modern applications / devices, including 16 ms response time. I fired up an old game the other day on my arcade cab (CRT screen), some shoot'em up, and it was silky smooth. I'm pretty sure it was "only" 60 Hz, but it was constantly 60 Hz: any input with the joystick or buttons had results the very next frame.

This felt so much smoother than any of the army of modern devices I'm using on a daily basis: even if they can animate stuff at high refresh rate, the latency before the animation starts is what makes using them painful.

Refresh rate is a thing but so is the latency between when your input and when, visually, it produces a result.

I've seen people working on ports of old arcade games use high-speed cameras to record LEDs physically hooked to the joysticks/buttons, to make sure that "input at frame x means response at frame x + 1". Short of that, your app very probably is not responding in 16 ms or less, unless you really know what you're doing.

There was this famous rant by John Carmack where he lamented that on PCs it was faster to do a transatlantic ping than to push one pixel to the screen. I don't know how far we've come, but when I compare modern devices to my old arcade cab and its measly 60 Hz (but 16 ms latency), I'm still not impressed.

A 120 Hz, 144 Hz or 240 Hz display is no good if it takes 35 ms between when you move the mouse and when you see the result on screen: that's not "120 Hz" but effectively 30 Hz. And 30 Hz feels laggier than a 35-year-old arcade cab: it is that shameful. 35 years, and it still feels more responsive than any productivity app.

I remember a recent tool posted here (I think for OS X; maybe an editor) by someone who was fed up with this extreme "input lag" and guaranteed his program would respond in less than 16ms (or maybe it was 24ms, I don't remember exactly). But that is an exception.

I think you're highly underestimating how smooth 60 Hz already is when there's no input lag. Now, of course, I'd take 120 Hz or more over 60 Hz any day, but we very badly need to focus on input lag too.

And, sadly, we live in a world where I'd guesstimate that 99.99% of programmers are totally unable to push anything to the screen in 8 ms or 4 ms, due to limitations of their tools (do they have high-speed cameras, and can they prove how fast things are pushed to the screen?), knowledge (I'm not John Carmack, and modern software stacks sure seem complicated), languages (let's not start a flame war), or mindset (never optimize anything, 100 MB JavaScript downloads are fine, etc.).

Except for top-notch game programmers working on AAA titles.

So 240 Hz monitors, sure: bring them on. But also bring me the programmers and tools needed so that I'll see the result of my inputs in 4 ms.
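
For the software slice of that latency, here's a crude in-browser approximation (TypeScript): it measures input event to the next painted frame and deliberately ignores the hardware/OS/display part that only a high-speed camera can catch.

    window.addEventListener("pointerdown", (ev) => {
      const inputTime = ev.timeStamp; // high-resolution, same origin as rAF timestamps
      requestAnimationFrame((frameTime) => {
        // Time from the event being dispatched to the next frame being produced.
        console.log(`input -> next frame: ${(frameTime - inputTime).toFixed(1)}ms`);
      });
    });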

anthk 1143 days ago [-]
>I remember a recent tool posted here (I think for OS X: maybe an editor) here by someone who was fed up with this extreme "input lag" and was guaranteeing his program would be answering in less than 16ms (maybe was it 24ms, don't remember exactly). But that is an exception.

The Mac OS 9 Lives community praises Mac OS 9 over OS X for exactly that reason.

http://macos9lives.com/

ALittleLight 1143 days ago [-]
I've worked on a project that failed and I always felt speed was a real problem. I tried but never succeeded in convincing people that speed was the issue.

In our case, our users had a specific flow through the application they would use, and it worked, but it required clicking many (10+) buttons and waiting for a web request on each. People on the team were satisfied that the flow worked and going through it didn't take TOO long... But what people on our side didn't get is that our customers had to go through this flow dozens if not hundreds of times - some users would need to do it this many times regularly. It effectively made our users hate using the product, or they would refuse to, or they'd use it but only a little bit and they'd try to minimize the cost.

I tried to get people on our side to experience the pain points, e.g. asking PMs to follow this flow one hundred times, and things like that, but I never could get through to anyone that we should redesign and refocus on making it usable. Maybe a mockup of a faster flow was what was needed to be persuasive there.

oehpr 1142 days ago [-]
I've got a team that I am actively trying to convince that this is a problem, and I am scared that I'm failing. We're introducing new components with deliberately added latency to make transitions smoother (and I mean LARGE latency: 500ms large). I brought up the sheer insanity of doing this and got in response "no one has complained about it so far... we can tweak it if you like, but I'm following best practices" (citing nothing).

I bring this up with others, and they are lukewarm about it. I feel like our company is in deeeeep shit if I don't convince people this is a problem.

MarkLowenstein 1142 days ago [-]
There are two levels of slowness being talked about here, both valid. One is the 16ms vs. 60ms response to typing and touch. The other is yours, and I think yours is the more problematic one. Not only do those multi-second waits accumulate through the workday into a significant fraction of the day, but each one presents an opportunity for the user's mind to wander, or for other parallel tasks to be switched to, with a high cost of returning to the current state.
dragontamer 1143 days ago [-]
> When you touched a Razr or a Palm phone, there was a delay. It felt sluggish and slow.

I always felt like resistive screens were more responsive than capacitive screens.

Case in point: My 3DS resistive screen and Palm Centro responded instantly. I think their downside was the necessary use of the stylus (because of the additional precision, their UIs required you to pull out the stylus before you could do anything effective).

What the Apple iPod Touch / iPhone did, was allow you to touch without using a stylus.

Anyway, I read this post as if it's a mirror image of my reality. The one thing I remember about Apple's capacitive push was that it felt slower than what I was used to. Honest.

--------

With that being said: I've played fighting games vs. opponents who can 1-frame link and counter throws within 7 frames (115 milliseconds). I'm well aware of the human brain's capability to process data far faster than most people realize.

Musicians, Video Game players, Athletes... I expect most of them to have reaction speeds well above average: below the 200ms typical human. Even then, "average" humans have far better reaction speeds and ability to perceive things that happen in factions-of-a-second (at least, once you make them aware of those things).

UI-speed is absolutely a great feature. I just disagree that Apple's iPod Touch or iPhone was a good representation of that.

throwaway189262 1143 days ago [-]
Anecdotal, but speed and refresh rate are really important for gaming. I thought >60fps was a gimmick, but a friend's new screen convinced me otherwise. It's visually obvious up to around 120fps. Moving the mouse around, I can perceive increases in frame rate up to about 200Hz.

I upgraded my monitor to 144hz, got a low delay mouse and headset (some headsets have over 400ms delay and audio response is faster than visual!). My ranking in games I've played for years has gone up about 1 standard deviation. I'm at my record high ranking in every game and it continues to rise.

Likely a biased study, but Nvidia found an eyebrow-raising difference in player performance when using higher refresh rates. https://www.nvidia.com/en-us/geforce/news/geforce-gives-you-...

ThePadawan 1143 days ago [-]
Similar anecdote: I recently investigated tablets to use for drawing.

Everybody online said the ne plus ultra was the iPad Pro, even compared to other name-brand devices from Samsung/Microsoft.

So I tried them both, and wow. 120fps and a screen optimized for low delay really makes an enormous difference.

With all the other tablets, it was a question of "how awkward does this feel to use?"; with the iPad that question didn't even come up.

I know this sounds like shilling, but I recommend just trying it out on a real device sometime, even or especially if you have no intent of buying one.

throwaway189262 1143 days ago [-]
Gaming stuff is also low delay. My screen tested at 4ms and gaming mouse at 6ms.

Newer studies have shown recognition of events as fast as 13ms. https://news.mit.edu/2014/in-the-blink-of-an-eye-0116

More than 30ms of delay is noticeable. My old screen + mouse had a delay of ~50ms, crudely tested. My old bluetooth headset was over 400ms!

I totally believe you that delay is noticeable. I haven't used an iPhone, but Android has terrible UI lag virtually everywhere (pointless animations don't help; pro tip: you can turn these off in developer options).

coder543 1143 days ago [-]
> My old bluetooth headset was over 400ms!

Which headset did you switch to?

I bought the HyperX CloudX Flight (what a name) wireless gaming headset about three months ago, and I was shocked at how much latency I could feel in something that was supposed to be a dedicated gaming headset.

There's no inherent reason that a wireless headset has to have more latency than a wired headset, analog wireless being the extreme example of no added latency, but a purpose-built wireless headset seems like it would use some digital wireless protocol that is optimized for low latency, instead of buffering something like 100ms of audio in the channel. That ~100ms to ~150ms of latency really impacts reaction times.

So, I could switch to a wired headset... I just wish I could find a wireless headset that didn't suck. Microsoft just recently introduced their new "Xbox Wireless Headset", which looks awesome, but... the absence of any latency specification is not encouraging.

throwaway189262 1140 days ago [-]
Gaming headsets are generally all low delay. HyperX stuff is all 50ms or less. The biggest thing is just don't use bluetooth, which has notoriously terrible delay.

Proprietary USB dongles for low-delay headsets are the standard for the foreseeable future.

Don't switch to wired, there's no point. Chances are, your HyperX headset has the same latency as a wired connection.

e_proxus 1143 days ago [-]
How do you actually test screen and mouse delay? Is there some good software for testing each in isolation? I know of Is It Snappy? for iOS, but that only measures end-to-end latency.
mnembrini 1143 days ago [-]
It's pretty hard to measure end-to-end delay; Nvidia is only getting to it now with https://www.nvidia.com/en-us/geforce/news/reflex-low-latency...
throwaway189262 1140 days ago [-]
Not easy. You can measure visual delay with your phone camera, but the results are still suspect.

I measured my delay using a USB keyboard, which is generally assumed to have zero delay, but that's not a great assumption.

I base most of my delay numbers on reviewers who have special hardware.

aequitas 1143 days ago [-]
I still play games at <=60fps; I see it like training in a gravity room ;)

Jokes aside, I'm happy I haven't had the same experience as you for gaming, because then I would have to buy into high-performance gaming. I can now happily play a game on something like Stadia or my old MacBook without feeling that something is wrong or missing. Kinda like how watching movies on VHS was fine until HD came along. Now every artifact or resolution drop in a video is an annoyance.

universa1 1143 days ago [-]
Well, that mostly depends on the kind of games you play. Most esports titles probably benefit from a higher refresh rate / more fps, while most single-player games, except the occasional shooter, probably don't, with MMOs somewhere in between. It also quickly becomes very technical, as not only the display latency is interesting but also the input latency.
throwaway189262 1143 days ago [-]
It depends on the game. I play competitive shooters which I've gotten worse at as I age...

PC upgrades have given me a 50ms reaction-time advantage, nearly what I've lost since my early 20s. Feels nice to be "good" at games again.

twic 1143 days ago [-]
My setup gets 20 fps in TF2 on a good day :(.
ksec 1143 days ago [-]
Maybe speed isn't the right word; latency would be better.

We can look at input lag [1] and Microsoft Research's work [2] on touch input, with Apple's ProMotion being part of that as well. For the past 20 years we have made 10-100x improvements in bandwidth at the expense of latency. Now we need to do more work on it, especially if we want VR or AR, which are extremely latency-sensitive. John Carmack [3] used to talk a lot about it when he was still working on Oculus: how it was faster to send something hundreds of miles away than to show it on your computer screen.

[1] https://danluu.com/input-lag/

[2] https://www.youtube.com/watch?v=vOvQCPLkPt4

[3] https://danluu.com/latency-mitigation/

draklor40 1143 days ago [-]
Whenever I bring up the topic of performance and speed of software, I get "Premature optimization is the root of all evil", but in reality, companies are spending billions just to squeeze an additional 1-2% improvement out of compiler optimizations, browser engines, kernels and processors.

Speed matters. I CAN perceive the latency of using an SPA vs. a native application. I notice the difference between executing a GNU binary and running a JS-based script.

ratww 1143 days ago [-]
> "Pre-mature optimization is the root of all evil"

I agree with your post. We as a community have completely subverted the meaning of this quote. It is originally about the need to profile your code, and about how programmers' instincts often fail them, making them optimize the wrong things.

But when it mixed with Startup Culture it morphed into "don't worry about speed, just write whatever shitty code comes into your head and only optimize if a customer complains... scratch that, let's not listen to customer complaints because we know better".

Like you said, some companies with good products and some good developers are following what Knuth had to say and are constantly optimizing for speed (but after profiling). Others are engaged in a race to the bottom and are trying to convince everyone else that careless engineering is somehow better.

draklor40 1143 days ago [-]
It's easy: you deliver the feature and then optimize its performance.

What they don't say is that there is never an end to the set of feature requests you will get. No matter how many crappy, unusable features you throw in, there will always be a request to tweak this, tweak that...

tasn 1142 days ago [-]
I think you and I interpret "premature optimization is the root of all evil" quite differently. To me it doesn't mean you should write slow code, but rather that you shouldn't optimise a piece of code before: 1. You actually got it working and you know you need it (so you don't waste your time). 2. You profiled the system and you know the code you are optimising is actually making things slow, and that it being slow matters.

This applies to every metric, not just performance. For example: don't optimise your top of the funnel when your bottleneck is actually conversion.

So measure, and then optimise. Don't optimise prematurely.

baxtr 1143 days ago [-]
> Yet teams consistently overlook speed. Instead, they add more features (which ironically make things slower). Products bloat over time and performance goes downhill.

This. Yet, I’d say it’s not the teams. In my experience it’s usually management that demands new features and doesn’t care about speed.

onion2k 1143 days ago [-]
Yet, I’d say it’s not the teams. In my experience it’s usually management that demands new features and doesn’t care about speed.

This line of reasoning makes me sad. It highlights so many problems in a company that a developer is having to deal with:

- Seeing 'management' and 'devs' as opposing teams shows a lack of communication and a lack of understanding from everyone involved.

- A company where managers aren't willing to listen to developers is never going to put out a great product. Developers have expertise and know what they're doing.

- A company where developers think they know best is never going to put out a great product either. Managers also have expertise and know what they're doing.

- If the "managers" dictating that features are added are "higher ups" rather than product managers then the company is never going to put out a great product because the people who talk to the customers and look at usage metrics should be driving the product roadmap. Customer needs should be driving what gets added.

- Developers who aren't putting up a fight to write good, fast code because they're not being listened to stop caring about what they're building, and that means there are very likely to be other problems like significant bugs, tech debt, etc. That just grinds you down and stresses you out.

All in all, if your opinion is "the product I build sucks because managers make it suck" you probably need to find a new job. Not every company is like that. Find a good one.

the_cramer 1143 days ago [-]
"Developers have expertise and know what they're doing."

Unfortunately this is not always the case. Currently there are other "developers" getting access to our SQL servers who drag down performance a lot. It looks to me like they may be coming from an OOP world and are now trying to force those patterns onto SQL, which doesn't scale at all.

So developer != developer and not all developers are good imho.

corty 1143 days ago [-]
I think it might be necessary to just frame speed as a problem everyone can relate to, Ferengi as well as developers.

One possibility for speed as in latency would be to pre-agree on a latency budget (as in realtime-systems: if you exceed that deadline, your system has failed). Then have everyone be aware of how they spend that latency budget. Say the latency deadline is at 500ms for your website to full interactivity. Currently you are at 320ms. Marketing wants to include some analytics scripts. Include them in the test page, measure the added latency, then check against your deadline: added 200ms, we are now at 520ms. Do we reject marketing's wishes or do we make the design dept. cut back on their image load times, maybe they can get from 130ms to 90ms? How about investing in better caching to get 100ms?

That way you can discuss numbers and quantify how something impacts the overall experience. Budgets are something everyone understands, and taking a big gulp out of a limited budget is something no one wants to be seen doing.
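
To make that concrete, here's a minimal sketch in Python of what a shared latency-budget ledger could look like. The component names and numbers are just the illustrative figures from the comment above, not anything from a real project:

    BUDGET_MS = 500  # agreed deadline to full interactivity

    # Proposed changes, in the order they are discussed (negative = saving)
    changes = [
        ("current page", 320),
        ("marketing analytics scripts", 200),       # pushes us over budget
        ("design: smaller images (130ms -> 90ms)", -40),
        ("infra: better caching", -100),
    ]

    total = 0
    for name, cost_ms in changes:
        total += cost_ms
        status = "ok" if total <= BUDGET_MS else "OVER BUDGET"
        print(f"{name:42s} {cost_ms:+5d} ms -> {total:4d} ms  [{status}]")

Running it makes the trade-off visible to everyone: the analytics scripts alone blow the budget, and the follow-up savings are what bring the page back under the deadline.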

eska 1143 days ago [-]
A technique I learned from Tanenbaum's operating systems book back in the day, that has served me well over the years, is to come up with the theoretical maximum and compare it to a naive implementation.

For example right now I have to deal with a data interface provided by a partner company. The theoretical limit of the interface should be 1.625 MB/s. If we were to stupidly copy our numeric streaming data over, we would be within our timing budget, and optimizing this would be optional.

However, the implementation of the partner company only reaches 0.1 MB/s max. So their "smart" implementation/interface is only 6% efficient, or in other words could be 16 times better. That really helps put things into perspective, and turns management's bullshit filter on when the partner company says "you can buy this from us if you want more performance" or "that's just the limit of the hardware".
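
The arithmetic behind that comparison is trivial but persuasive; a quick sketch with the numbers from the comment:

    theoretical_mb_s = 1.625   # limit of the interface
    measured_mb_s = 0.1        # what the partner's implementation achieves

    efficiency = measured_mb_s / theoretical_mb_s
    headroom = theoretical_mb_s / measured_mb_s
    print(f"{efficiency:.1%} efficient, ~{headroom:.0f}x headroom left")
    # -> 6.2% efficient, ~16x headroom left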

danjac 1142 days ago [-]
> Customer needs should be driving what gets added.

In enterprise, the "customer" is going to be someone who will almost never touch the product once they've signed off on it. Building something that's a pleasure to use (good UI, responsive etc) for the end-user is usually wayyyy low on their list of priorities.

goatinaboat 1143 days ago [-]
Yet, I’d say it’s not the teams

Speaking of Teams, there is something I don't really understand: we have just experienced nearly a full year of intense competition between Teams, Webex, Zoom, Bluejeans, Skype and all the rest. All of those products should be AMAZING by now. But actually most of them are still clunky as hell, and Teams itself is probably the worst of them: it's still as slow as it was a year ago, still unreliable, and still missing (trivial) features that people actually want, like the ability to block/ignore certain contacts. And it can't be - if anyone at Microsoft actually uses it themselves - that they don't know how bad it is. But they seem to be doing absolutely nothing about it.

Nextgrid 1143 days ago [-]
Microsoft Teams isn't designed to compete on technical merit, it's designed to appeal to bean-counters who have either never used anything better, or have not used and will not use the software at all, thus slowness isn't an issue because the people it would affect (the end-users) have no say in the matter.

No opinion on Bluejeans or Webex, but I assume it could be the same as above.

Zoom isn't actually that bad. UX-wise it has some issues but speed-wise it performs better than everything else I've tried (probably helps that it has a native app instead of being browser-based or Electron garbage).

Skype is a consumer product and is left to stagnate more or less. Most likely, they don't see enough profit potential in it to make it actually better (which would involve throwing away the Electron crap and rebuilding a - or dusting off the old - native app).

BlargMcLarg 1143 days ago [-]
> Microsoft Teams isn't designed to compete on technical merit, it's designed to appeal to bean-counters who have either never used anything better, or have not used and will not use the software at all, thus slowness isn't an issue because the people it would affect (the end-users) have no say in the matter.

Quoting for emphasis. It's incredibly telling that most people who started WFH in spring 2020 are nowhere near as attuned or sensitive to this stuff as people who regularly used similar commercial apps built for a different target audience (gamers).

Teams feels like two steps forward, one step back compared to Skype. It looks more streamlined, business-y and has cool integrations, but so many design choices irk me and the performance is meh.

Nextgrid 1142 days ago [-]
It's not even a 2020 thing.

For most people in the environments where Teams is rolled out, the standard used to be either e-mail (using bloated clients like Outlook) or Skype for Business (formerly Microsoft Office Lync).

The question as to why those previous options can't be as good as the consumer-grade alternatives (such as the social networks they're often using) has already been settled long ago and they've accepted whatever BS answer they've been given.

Compared to what they used previously, Teams is indeed an upgrade (albeit small) and they are unlikely to question its quality as they've already accepted that the tools they use in their enterprise are terrible compared to consumer-grade alternatives they use outside of the enterprise.

The real eye-opener for them would be to try Slack, where they would suddenly realize that office chat doesn't have to suck, though I have to say Slack is doing a great job over the last couple years at catching up to Teams when it comes to terribleness.

MagnumOpus 1142 days ago [-]
> designed to appeal to bean-counters who ... will not use the software at all

Easy conspiracy theory to postulate, but with WFH "bean counters" use video conferencing software just as much as devs - indeed probably far more intensely.

bombcar 1143 days ago [-]
Zoom is frighteningly above all the rest; the speed is there, the background replacement is miles ahead of Bluejeans - if they weren't worth $infinity I'd expect Microsoft to buy them and shove it into Teams somehow.
jorams 1143 days ago [-]
Having used Teams, Zoom, Skype, Google Meet and Jitsi, I'm having a very hard time understanding why people go through the pain of Teams, Zoom or Skype.

Jitsi and Google Meet don't employ any dark patterns, they are fast, they are reliable, they are easy to use. (I do use them only in Chromium.)

Zoom desperately wants me to download their client, even going so far as to require multiple failed attempts before showing the button to join using the browser. Then it wants me to complete a CAPTCHA before letting me join. If I do use the client it opens several separate windows and asks if I want to join using desktop audio. (Of course I do, and if you still want to ask please keep it in the main window.)

o_m 1143 days ago [-]
I've tried to think about why this is, and I think it is because no one wants to make things faster, because that would be admitting you didn't do it right the first time. So product owners try to sweep it under the rug, and they focus on things they can present to their own leaders so they stay on good terms all the way up.
hutzlibu 1143 days ago [-]
I rather think most devs have overpowered hardware for development - and simply don't notice the performance drops that show up on consumer hardware.
reassembled 1143 days ago [-]
My friend develops PixelCNC, an OpenGL app for generating 3D tool paths from 2D images, on a netbook because he knows many of his users have under-powered older systems.
iKevinShah 1143 days ago [-]
This is so true. My workstation was a 2nd-generation i5 with my development code running on an HDD partition; the timing for every request was ~200ms to 1s depending on multiple factors, and page performance in Chrome was consistently ~1s total.

I upgraded to the latest i5 with an SSD for data, and the performance drops are basically 0. It bothers me that there is something which needs improvement and I can't check it on a commit-to-commit basis. But yes, it is very easy to overlook performance drops because of new / faster hardware.

theptip 1143 days ago [-]
This is a popular sentiment around here, because we care about the craft and want to build something that is beautiful/well-made.

However, speed is not “the killer feature”. Speed does not add any value in isolation; your app needs to solve a need for the user first. If you don’t have PMF don’t think about speed yet.

The article gestures at objectivity by linking some cases where people measured revenue gains from speed improvements, but fails to follow through and actually propose an experiment or ROI calc. If you think your app is slow, run an experiment and measure the impact on conversion. (You can even take a page from Google’s book and _add_ delay with a simple sleep() if you don’t want to spend any time on perf work before you get data. Or just do the first bit of low-hanging fruit and measure the impact.)
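
As a rough illustration of the "add delay, measure conversion" experiment, here is a minimal sketch; Flask, the 300ms penalty, and the hash-based bucketing are all assumptions for illustration, not anything proposed by the article:

    import time
    from flask import Flask, request

    app = Flask(__name__)

    @app.before_request
    def maybe_add_delay():
        # Put roughly half of visitors in a "slow" bucket; log which bucket
        # each session fell into, then compare conversion between buckets.
        if hash(request.remote_addr) % 2 == 0:
            time.sleep(0.3)  # artificial 300ms penalty

    @app.route("/checkout")
    def checkout():
        return "checkout page"

A sleep() is enough to get causal data on what speed is worth before spending any time on real performance work.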

Talk to your users and ask them what frustrates them in the app. It might be “takes so long to check out”, or it might be “it just lacks feature X that competitor Y has”. I’d suggest it’s unwise to spend time on perf work if you are pre-pmf and the main feedback was the latter. Again, do experiments too because customers don’t always tell you what they need. In particular enterprise users often don’t care as much about speed, as long as you tick all of the boxes. Many users are used to line-of-business software that is slow and buggy, so your bar in B2B is not always high here.

Finally do an ROI calculation. If a perf iteration is going to cost you $20k in dev resource, and get you 7% improvement on $10k of monthly revenue, that might not be the right thing to focus on. Ideally you’re looking at features that will improve your top of funnel volume more than that.
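
Worked through with the hypothetical numbers above, the payback period makes the trade-off obvious:

    perf_cost = 20_000        # one-off dev spend on the perf iteration
    monthly_revenue = 10_000
    uplift = 0.07             # assumed 7% conversion improvement

    extra_per_month = monthly_revenue * uplift      # $700
    payback_months = perf_cost / extra_per_month    # ~28.6 months
    print(f"${extra_per_month:.0f}/month extra, ~{payback_months:.0f} months to break even")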

It’s all a trade-off. It depends on your company’s level of maturity, Product/Market fit, and the value of the marginal feature that you’d be deferring to make your app go faster.

If we interpret this to be a political manifesto carrying the message “you should care more about speed/performance”, I’d prefer the meta-level “you should care more about trade-offs and marginal value”.

vp8989 1142 days ago [-]
Maybe it's because I am not in "the Valley", but this line of thinking makes absolutely no sense to me. You are speaking in generalities, so you are essentially implying that most people are working on software that has not achieved product market fit?

How could the majority of people collect a salary working on software that has no users? That makes no sense.

theptip 1142 days ago [-]
Hacker News is affiliated with YC, a startup accelerator. While there are lots of FAANG engineers and engineers from all parts of the spectrum here too, this community biases towards startups. If there is a place to discuss strategy for pre-PMF startups on the internet, this is it.

I’m not intending to generalize, quite the opposite. I’m arguing against a generalization that “speed is the killer feature [for all products]”. In my post I presented a few cases where this over-generalization is not true (pre-PMF startup and potentially large B2B product) and suggested a more nuanced and objective analysis of the trade-offs.

> You are speaking in generalities, so you are essentially implying that most people are working on software that has not achieved product market fit?

Perhaps try parsing my post as “here are a few examples of common cases where speed is not the killer feature, and a sketch of a more flexible thought process that will get you to a better answer”.

Product managers at both large and small companies use ROI and experimentation to figure out what to build (though in some ways it’s actually harder at a startup as your sample size can be too small to get statistical significance, or at least to run as many experiments as you’d like).

karpierz 1142 days ago [-]
If you have 10 companies to invest in, each of which has a 10% chance to produce 20x returns, 90% of the engineers at these companies wouldn't really have users.

Basically, if you think that software is feast-or-famine as an investment, then this would make sense.

DylanDmitri 1142 days ago [-]
If all engineers worked at early-stage startups, this would be the case. However most engineers (upwards of 95% outside of Bay Area) work at established companies.
brundolf 1143 days ago [-]
Two points:

1) As programmers we're biased to feel like speed is the most important thing because it's very fun and satisfying to optimize. In reality, for actual users, it's one of many different axes of value that have to be weighed against each other. In some domains it's critical, in some it matters very little, in most cases it's one important factor among many.

2) There are different types of "speed". Generally anything that's supposed to mimic something physical - basic UI feedback, real-time games/simulations, etc - has a much higher speed requirement than some abstract process. Will the process take long enough that it makes sense to show a loading spinner at all? Then the user probably won't mind waiting a couple extra seconds. Will it take <500ms? Then the user will perceive it as roughly "instant", and will notice if there's a bit of "lag".
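
One common way to act on that threshold is the "delayed spinner" pattern: only show the loading indicator if the operation exceeds the budget. A minimal asyncio sketch; the 0.5s threshold and the print calls standing in for real UI updates are assumptions:

    import asyncio

    async def with_delayed_spinner(coro, threshold=0.5):
        task = asyncio.ensure_future(coro)
        try:
            # If it finishes inside the threshold, no spinner is ever shown.
            return await asyncio.wait_for(asyncio.shield(task), timeout=threshold)
        except asyncio.TimeoutError:
            print("show spinner")        # stand-in for real UI feedback
            result = await task
            print("hide spinner")
            return result

    async def slow_operation():
        await asyncio.sleep(1.2)         # pretend network call
        return "done"

    print(asyncio.run(with_delayed_spinner(slow_operation())))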

> Phones in 2007 had the same features as the iPhone. The Palm Treo even had a touch screen. The difference was speed.

If the original iPhone had taken twice as long as the Treo to load a web page, but the touch screen was still more responsive, people still would have perceived it as being "faster". The extra seconds matter less than shaving off the extra milliseconds.

sgeisler 1143 days ago [-]
As a former BlackBerry 10 user (QNX-based with C++/Qt native apps), that's something that annoys me endlessly about Android. How can a simple action like displaying a small, locally cached playlist take any noticeable time at all?! There is no inherent reason to build apps in JS+HTML, adding a dozen additional layers that all cost precious time. Even some "native" (Java) apps seem slow at times. Also, switching between apps often causes them to be effectively closed, adding startup time when reopening them (are they really that memory-hungry, and why?!). I never had these problems with my BB10 phones even though they had half the RAM (2G vs. the current 4G on Android) and far fewer cores (2 vs. 8). I wish they hadn't discontinued this awesome platform.
tempestn 1143 days ago [-]
I would argue that the key differentiator of the first iPhone was screen size. It was the first popular phone where essentially the entire face of the phone was dedicated to screen, made possible by a software keyboard. By today's standards it's tiny, but at the time nothing else came close. Trying to do anything on any other phone was impossibly cramped by comparison. Especially using the web, since there was no such thing as a mobile or responsive page then, so you needed a phone with the screen real-estate to use desktop websites. The iPhone was the first phone to make this less than utterly painful.

All that said, I do agree with the general thesis. Evernote has just come out with a huge update of all their apps, having ported them all to Electron to standardize development. The only problem is, they're all brutally slow compared to the native apps that preceded them, and it truly ruins the experience.

Tepix 1143 days ago [-]
The iPhone was also the first mobile phone with a touchscreen that worked really well with just your fingers.
trymas 1143 days ago [-]
Also, AFAIK, the iPhone was the first mainstream device with multi-touch.

At the time I had limited experience, but personally I thought touch screens would never work, because they were slow, imprecise and unresponsive (often working only with a special pen), and then the iPhone came along with a buttery-smooth experience and multi-touch. Mind blown.

baybal2 1143 days ago [-]
No, HTC was the first to explore a touch-only interface.

Even before the HTC Touch, their WinMo versions had allowed touch-only operation for 3-4 years.

dagw 1143 days ago [-]
And before that there was the Ericsson R380. None of them could really be described as working "really well".

If you wanted to, you could perhaps argue that the PalmPilot was the first touch-only portable device that worked "really well", but that wasn't a phone (and you'll have lots of angry Newton fans telling you that you are wrong). Or you could try to make an argument for the Treo, but it wasn't really "touch only".

As someone who has used every device mentioned above (and owned at least half of them), I personally feel comfortable calling the iPhone the first phone with a touch only interface that worked "really well"

baybal2 1143 days ago [-]
The fact stands that HTC had a working touch shell and a basic suite of touch-only apps for WinMo a few years prior to Apple.

I will also argue that Apple copied some of their UI ideas verbatim.

tempestn 1142 days ago [-]
I briefly owned an HTC Titan before the iPhone came out. I returned it after a few days and got a flip phone instead. It was just too painful trying to do anything with the finicky stylus on the tiny screen. And yes, it was slow too.
renewiltord 1142 days ago [-]
The iPhone's release was the first concrete time I instantly noticed that tech enthusiasts are shit about understanding tech products. Every normal person described it in glowing terms but /r/technology, Slashdot, and every damned tech enthusiast community spent most of the time talking about Reality Distortion Fields and how Nokia had this or that feature and the iPhone couldn't copy-paste.

Stopped listening to these people for product expertise. Even took a chance on Facebook at $19 when HN was gleefully expounding on how this was obvious and the company was doomed. Glad I did that.

Did it again when everyone on HN was convinced SMCI was spying for China. Worked out again.

I'm going to call it "Tech Enthusiast Inverse Sentiment Index" TEISI. List it on the NYSE and people can make big money doing the opposite of people here. Maybe you get a couple of losses like WeWork and whatever but overall, I think you win.

Siira 1142 days ago [-]
Confirmation bias, survivorship bias, etc. Make that index if you aren’t just bullshitting.
renewiltord 1142 days ago [-]
It's a joke. I'm not enough of an idiot to create a trivially manipulable security.
lordnacho 1143 days ago [-]
Slowdown has actually been the only reason I ever replaced a phone. Somehow the manufacturer sent an update and stuff turned to molasses, and that has been my trigger to get a new phone each time. There's no excuse for this, it's not like the apps I use are especially demanding. I've written a few apps on the side, and most of the ones I use should just be your average mashup of buttons, pictures, and REST calls. Besides, the slowdown comes when the OS is updated, so it's probably not the apps changing.

I'll never get another Samsung, even though I don't know if they did it deliberately, or if it was even them that did it.

Somehow my current phone has lasted 3 years with no appreciable slowing.

ancarda 1143 days ago [-]
This is one reason why I prefer command line programs and websites like SourceHut and HackerNews over GitHub and Reddit. Also, why I disable or reduce as many animations as I can in graphical software I use.

Everything is just too slow -- and it doesn't need to be.

StillBored 1142 days ago [-]
Speed is a minimum requirement of most systems, same as correctness.

It seems to me that basically 100% of the UI/UX developers at the big tech companies are woefully ignorant of the fact that there is a massive amount of data and papers written about human computer interaction. I'm guessing that is because few comp-sci programs even touch the topic, rather spending all their time on more esoteric/mathematical topics.

In summary, a very large number of studies were done in the 1960s-1980s on the _human_ aspects of user responsiveness (important when timesharing became common), how people learned computer interfaces, and how effective they were at operating them. Despite some of these papers being > 40 years old, none of it has really changed, because the studies were about humans more than computers. The underlying computing may have changed from a time-shared terminal to a phone in someone's hand connected to a server, but in that time the human cognitive loop hasn't changed.

IMHO, and somewhat backed by the science, any system which isn't responding in under 100ms is broken unless it's performing something extraordinary. If it's actually interactive (like typing at a command prompt), even that is far too slow. User frustration and loss of attention are real things, and you can bet that when given the choice users will pick less frustrating systems. The saving grace for many of these platforms is that the entire industry is trying to be like the fashion industry and follow the latest trends. So it doesn't matter if BigCoX makes a huge UI blunder; all the others will follow it down the lemming hole.

So tell me why some of the conclusions in a paper like http://larch-www.lcs.mit.edu/~corbato/sjcc62/ (1962) are wrong. How about: http://yusufarslan.net/sites/yusufarslan.net/files/upload/co... (1968)

Amusingly, other classics like https://www.microsoft.com/en-us/research/wp-content/uploads/... are rediscovered regularly too (1983).

bdickason 1142 days ago [-]
Author of the post here - I had no idea there were so many more people out there like me passionate about speed (and frustrated by how slow apps/devices are these days).

Thanks everyone for sharing really awesome examples in the comments here - from Games to Receipt Printers to Apps, it's clear that speed is valued.

Or... that there's a big opportunity to bring back lightning fast products :)

ZephyrBlu 1143 days ago [-]
All I have to say is that the "what would it be like to live with lag?" video is insane.

1/3 of a second is already insane lag, 3 seconds is just ridiculous.

Tepix 1143 days ago [-]
Agree. In VR to achieve "presence" you aim for 20ms of lag, or less. For voice calls you want 100ms max.
collaborative 1143 days ago [-]
The UI has to remain 100% responsive. So put that task on a background thread and show the user that it is being processed (with a progress bar for extra friendliness). Stay away from frameworks that don't have a responsive UI - native is best.
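
A minimal sketch of that idea in Python, standing in for whatever UI toolkit is actually in use; the print call is a placeholder for a real progress-bar update:

    import time
    from concurrent.futures import ThreadPoolExecutor

    def long_task(report_progress):
        for i in range(5):
            time.sleep(0.3)                  # pretend chunk of work
            report_progress((i + 1) / 5)     # tell the UI how far along we are
        return "result"

    def on_progress(fraction):
        print(f"progress: {fraction:.0%}")   # placeholder for a progress bar update

    with ThreadPoolExecutor() as pool:
        future = pool.submit(long_task, on_progress)
        while not future.done():
            # The "UI thread" stays free here to repaint and handle input.
            time.sleep(0.05)
        print(future.result())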
felixding 1143 days ago [-]
> ... a Palm phone, there was a delay.

My first impression was "unbelievable" - how on earth would anyone think a Palm device is slow?! Then I followed the link and saw a Palm Treo 750/V... oh, of course, that thing ran Windows Mobile.

A Palm device running Palm OS is blazing fast! I switched from a Treo 650 to an iPhone in 2009, and almost everything became much slower. The iPhone software was slow, and so was the user interaction (in the sense of UX).

Palm only started using Windows in its later years, and there were actually very few Windows Palm phones. Most Palm PDAs and phones ran Palm OS and were very, very fast.

lucas_membrane 1142 days ago [-]
Nothing to see here.

Speed has always been not only the most important thing, but virtually the only important thing. Back before most of you were born, there was a review (in PC Magazine, IIRC) of the category of spreadsheet programs. MBA Analyst dominated the others (VisiCalc and Lotus 1-2-3, IIRC) in every category except speed, in which it was OK, but not great. That's why you never heard of it.

The speed requirement is closely related to the self-importance fallacy. If a computer needs time to think, maybe we could make good use of a few moments' pause, too.

clarge1120 1143 days ago [-]
There are different kinds of apps for different use cases. Performance is a feature, but not always necessary.

For example, Line of Business (LOB) apps are built with ROI in mind. LOB apps help businesses run more efficiently, and employ the vast majority of developers. These are the most used apps in the world, and company owners are much more interested in functionality, automation, and distribution of apps than performance and usability.

coldcode 1143 days ago [-]
Our customers wind up spending a lot of time watching loaders while waiting for services to respond. Otherwise, speed in the app itself is entirely ignored; you're just happy you got a result. At a previous job people complained about how slow our iPad app was, so we measured every single step from tap through service call back to redraw, and 90% of the time was in our backend, even with average internet performance.
floatingatoll 1143 days ago [-]
Could someone please show this article to digital board game creators? There are so many great games that waste 50% of my playtime on PowerPoint transitions and smooth movements. It’s so frustrating trying to enjoy a game when you have to watch a ten second animation in order to have a single tile flipped over, or a five second fade just to represent end of turn.
bajsejohannes 1143 days ago [-]
It seems like Apple is moving backwards on this at the moment, though. Perhaps they were more concerned about it when they were trying to get into the market.

Examples I can think of: the emoji selector (ctrl+cmd+space) is quite slow. On my brand new MacBook it's a small but noticeable pause, and on my old MacBook it's several seconds (during which keyboard input is lost).

> If you can’t speed up a specific action, you can often fake it. Perceived speed is just as important as actual speed.

A second example is FaceTime on my iPhone. It fakes being fast by showing the last opened screen, which for me is very often "recent calls". The problem is that in the meantime there's been another call. Result: I see the person I want to call back, tap on the screen where they are, observe that the content changes, and call the wrong person. This happens often enough that I should learn, but somehow I don't.

baybal2 1143 days ago [-]
> When you touched a Razr or a Palm phone, there was a delay. It felt sluggish and slow. Apple removed the delay between your finger tapping the screen and something happening. Your finger could finally manipulate the UI in realtime, just like in the real world. It felt magical. If there was even a slight delay, the whole experience fell apart.

A very strange phone to reference. The first iPhone was slow as molasses with all of its excessive visual effects.

It was only around the OMAP iPhones that they first got proper hardware acceleration.

Palms were noticeably faster than WinMo 6, and WinMo 6 was faster than 5, which was indeed painful to use because of input lag.

Ironically, Android is still somewhat slower than WinMo 6 on input lag despite every trick Google is throwing at it.

I read somewhere they even tried to wire the input layer directly to hardware acceleration to make scrolling less laggy.

mangoman 1143 days ago [-]
In the US, everywhere I've lived, Comcast has been king. But their new TV boxes are so fucking slow I can't stand it: 2-second delays just to type a number. If any competitor was smart, they'd invest in their boxes' speed and destroy Comcast on that alone.
anthk 1143 days ago [-]
Replace Comcast with Orange in Spain, and the same experience with an Android TV Box.
ulisesrmzroche 1142 days ago [-]
This is Bullshit with a capital B. The killer features of the iPhone were looks and apps.

It was the first all-in-one (camera, music player, phone, game system, organizer, etc) that didn’t make you a bully target.

Not saying Speed is unimportant...I’m saying this is straight up lying.

Like back in the AOL days, when dialup was a thing, the internet was dogshit slow, but you still had to get in line to use it. Took hours more often than not.

If people valued speed more than anything, AOL would have gone bankrupt. People are willing to pay extra for speed but can live without it as long as the features are there.

This is starting to bug me because for startups, this is bad advice. It’s actually harmful since it’s all about product-market fit at the beginning. You’re better off throwing away code instead of optimizing.

nromiun 1143 days ago [-]
This is spot on. I was originally using PayTM to pay my phone and TV bills, and while it was a little bloated and sluggish, there was nothing better. But then Google Pay was released, and it was so much faster than anything else on the market.

Then Google Pay shipped a new update built with the Flutter framework, and now even scrolling takes ages to complete. I complained on the Play Store, but the reply said to check my internet speed.

Meanwhile PayTM has also redesigned their app, but unlike Google Pay their updates actually made the app much faster and more intuitive. I still check Google Pay from time to time to see if they have fixed their app, but the scrolling is still laggy (it feels like you are in a web page) and the loading page still flickers.

bombcar 1142 days ago [-]
>(it feels like you are in a web page)

Few apps are native anymore, they're all just wrappers around web pages. It sucks.

seanwilson 1143 days ago [-]
> Speed during Checkout - Every second of page load time kills conversion rates. A 1 second delay reduces conversion rate by 7%.

I think it's fine to say faster page loading makes users happier and will increase conversions, but you should avoid generalising with such specific figures (I often see page speed article titles citing conversion rate changes to 4 significant figures). It's going to vary wildly based on the product, audience, price, exclusivity, customer loyalty etc., and you'll get diminishing returns as well.

The impact page speed has on amazon.com conversions isn't going to be the same as on your side-project website for lots of reasons.

thitcanh 1143 days ago [-]
Speed is half of the picture sometimes. I once managed to book a flight on Google Flights and basically completed the purchase in less than a minute. An airline’s website could load instantly and still wouldn’t compare to just having a decent checkout experience.

It’s incredible that nobody has 1-click flight bookings a-la-Amazon yet.

Theodores 1143 days ago [-]
The metrics this is based on need to be known.

If you deliberately add a second to the checkout and measure the conversion rate, it will go down. But to then claim that reducing latency creates that much extra conversion is a lie. There is an oft-touted figure from when the BBC deliberately slowed its website to assess engagement.

However, the truth is that speed is good.

tome 1143 days ago [-]
The word "latency" is mentioned three times in the article, "speed" fifteen. Yet latency is actually the more precise and accurate word for the concept the article is trying to communicate isn't it?
CinematicStudio 1143 days ago [-]
Agreed 100%! I've redesigned the UI of my timeline (for a video editor) several times, in order to constantly make it faster.

It's painful (for me, that is :D), but I know it's the right thing to do.

gnyman 1142 days ago [-]
If you have an iPhone without a home button, go to Settings > Wallet & Apple Pay and turn off "Double-Click Side Button".

Now turn off the screen with the power button.

Notice the annoying delay when turning off the screen is gone? Enjoy :-)

Of course you also lose the ability to invoke the wallet manually, but luckily if you hold the phone near a payment terminal it will auto-activate.

This probably won't work if you're a heavy Apple Wallet user, but if you use it only sporadically I personally think it's worth it. I found the delay very annoying when I switched to an iPhone without a home button.

ska 1142 days ago [-]
I think this is overly reductive.

Speed (or more likely, perceived speed) is only one part of UX, and how much it matters depends a lot on what else is going on and on the user's expectations. Even focusing merely on responsiveness feels a bit superficial.

Something a bit closer to the core of it is that whenever a user is focused on waiting for your software, it reduces their experience. That can be articulated better I'm sure - and still is only one part of the (complex) equation.

bjarneh 1143 days ago [-]
In isolation we all agree on this, i.e. speed is important. But you constantly see high praise for many of the technologies that let this "slowness" creep into apps and websites. With each level of abstraction we lose speed; whether it's languages that "compile" to other languages, or ORMs, or frameworks that solve different tasks, when stacked on top of each other everything feels like mud.
tommilukkarinen 1143 days ago [-]
It's a long time ago so I don't remember well, but at least the camera was not fast (I was working on camera stuff at the time, so that's what I paid attention to). It was slow, barren of features, and looked like it was developed hastily by a student.

The thing with the iPhone was the capacitive screen, which made the touch UI work. At that point I had already worked with phone touch UIs for seven years, and that's the thing that felt like magic.

dystroy 1142 days ago [-]
Developers and marketers often overestimate how much the users will love the impressive and slow effects they pack their products with.

I was reminded of this today as I installed Debian on a new computer. Why do the Gnome makers imagine it's OK to have the *default* be slow ("Animations") rather than instant? Do they really think we'll be happy enjoying a 200ms or 500ms delay every time we minimize or open a window?

pul 1143 days ago [-]
I often wonder if my obsession with speed is helping me forward, or holding me back. (I'm working on https://www.nslookup.io on the side.) There's so much else to be done. Will users really care enough to come back? Is a 20% speed up worth more than a better design or an additional landing page? I don't know...
mrvenkman 1143 days ago [-]
The "ipod" and "ipod Touch" were the reason the iPhone was successful. The speed and reaction time was important too - but I wouldn't call it a "killer feature" - it was necessary because phones weren't slow - actually there was nothing particularly slow about about the RAZR. I don't understand the comparison.
phkahler 1143 days ago [-]
>> Does your checkout page take 10+ seconds to load? Did you have to wait for a loading indicator multiple times along the way?

Those aren't even the right questions.

Change 10 seconds to 1 for the checkout page. And then ask if they ever have to watch a loading indicator. We have no hope if we don't set good goals.

sverhagen 1143 days ago [-]
Speed is important. And when something is near or completely unusable, that's a bug. It is also a quality attribute that we architect for, to some, limited extent. Otherwise, speed is "just" a feature like any other, and Product Management should tell me where it ranks in priority.
fairity 1140 days ago [-]
Does anyone know of a reputable study on how load speed affects conversion rate? I’ve heard many people claim that small increases in page load speeds have been shown to have outsized impact on conversion but have never seen actual studies with proper controls.
benlivengood 1143 days ago [-]
I'm always amused when I need to use FVWM or xfce on old hardware and it's snappier and more responsive than Gnome on newer hardware. About the only thing old hardware can't do is smooth scrolling/resizing/moving and that's all GPU.
fvwmuser 1141 days ago [-]
There's nothing stopping you using FVWM or XFCE on newer hardware either.
buf 1143 days ago [-]
As a Notion user, I feel this pain daily. The lack of offline support or a modern, fast search is going to push me away.

For my personal notes, I'm still organizing it in local files (via vimwiki), but for team notes, Notion needs to step up its game.

lifeisstillgood 1143 days ago [-]
The lag video is fascinating - I see people who are labelled clumsy or uncoordinated, but maybe they just have a mental lag. After all, the world we 'see' with our eyes is a mental model, a 3D game world anyway.
MR4D 1140 days ago [-]
But not in finance apparently: https://news.ycombinator.com/item?id=26345982

Oddly the links were right next to each other.

bob1029 1143 days ago [-]
Speed is a tricky thing in a complex application. You are ultimately going to be forced to trade latency for horizontal scalability in non-trivial applications with lots of shared state which must also be consumed globally.

You can cheat in some weird and fun ways though. For instance, if you say "no user of this system will ever be more than 50ms away", you get to play some really interesting games with vertical scaling and consolidation of compute capability in an all-in way. I.e. server-side technologies ran out of a single datacenter near the userbase.

If your latency domain fits it, something like Blazor server side can be an incredible experience for your users. First load is almost immediate because there's virtually no client code. Everything is incremental from there. If you are within 50ms of the server, UI feels instant in my experience. The nature of how applications are developed with this tech means that if your business services are completing requests within the performance budget, you can be almost certain the end user will see the same.

Going to the bottom of the rabbit hole, understanding how NUMA impacts performance can make 5+ orders of magnitude difference in latency and throughput. Letting a thread warm up on a hot path and keeping it fed with well-structured data is foundational to ultra-low-latency processing.

You can handle well over a million events per second on a single thread on any recent PC using techniques such as LMAX disruptor combined with a web hosting technology like Kestrel. The difference between a class and a struct can be 10x if you get to that level of optimization. I measure user interactions in microseconds in my stack these days.

A millisecond is a fucking eternity. You shouldn't be doing a bunch of back and forth bullshit in that kind of latency domain. Stream client events to server as fast as possible, microbatch and process, prepare final DOM changeset and send to client all at once. How could any other client-server architecture be faster than this, especially if we are forced to care about a bucket of shared state?
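
A toy illustration of that "stream in, microbatch, push one changeset out" loop: the comment's actual stack is .NET/Kestrel with the LMAX Disruptor, so this Python sketch only mirrors the shape, and process/send_to_clients are hypothetical placeholders:

    import queue
    import threading
    import time

    events = queue.Queue()   # client events stream in here (e.g. from websockets)

    def process(batch):
        # Placeholder: fold the whole batch into one changeset.
        return {"events_applied": len(batch)}

    def send_to_clients(changeset):
        print(changeset)     # placeholder: push the single DOM diff once

    def microbatch_loop(window_s=0.001):
        # Single consumer thread: block for the first event, then drain
        # whatever else arrives within the batching window.
        while True:
            batch = [events.get()]
            deadline = time.perf_counter() + window_s
            while time.perf_counter() < deadline:
                try:
                    batch.append(events.get_nowait())
                except queue.Empty:
                    break
            send_to_clients(process(batch))     # one round trip per batch

    threading.Thread(target=microbatch_loop, daemon=True).start()
    for i in range(10):
        events.put({"event": i})
    time.sleep(0.1)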

jmacjmac 1143 days ago [-]
I think when you don't have a competitor, being slow is okay; people will use your product anyway. Otherwise, performance matters - even though it never matters as much as your feature set.
flavius29663 1143 days ago [-]
> it never matters as much as your feature set

How do you explain, then, that iPhones took over the market even though Nokias had many more features? Speed, or the feeling of speed, was part of it, I am sure.

jmacjmac 1142 days ago [-]
I didn't mean "more features are better". Sometimes, even a single killer feature is better than many features. Speed might be one of the reasons Iphone's success but how about this : If Iphone's were %20 slower than they were, would it take over the market ? I think it would. This doesn't mean being fast didn't help them. My point is, being fast is not enough all alone but being fast with a good feature set is great.
PTOB 1143 days ago [-]
I am a heavy AutoCAD user. I can type commands faster than today's AutoCAD can grab them. Sometimes it garbles them and executes an alphabetically adjacent command...
hnnotreddit 1143 days ago [-]
I remember when animations were used in UI for the purpose of masking wait times. On the new web, they're so misused they cause the wait now.
ChrisMarshallNY 1143 days ago [-]
This is absolutely spot-on.

That said, I feel like it is sort of belaboring the obvious.

I think that our overdependence on dependencies has a lot to do with UI latency.

IshKebab 1143 days ago [-]
This feels like a bit of history rewriting. Yeah, the iPhone was fast, but the real killer feature was the huge responsive screen. No phone until then had had a screen so big, or such a good touchscreen. You could browse desktop web sites! Early Android phones were very slow and janky, yet they still succeeded. There are mountains of enterprise software that are successful despite being insanely slow (cough, Teamcenter).

Not saying I necessarily disagree with the premise but they chose a poor example.

bambax 1143 days ago [-]
All true. Speed is what users want. Not fancy graphics and certainly not endless confirmations and security assessments.
swyx 1143 days ago [-]
Obviously the HN crowd is in favor of speed, but I would argue some of his examples are proof that speed matters less than other things. Notion is horrendously slow and I don't understand how people can choose to use it, but speed clearly isn't even a necessary condition for becoming a successful product.
tuckerpo 1143 days ago [-]
Mandatory Handmade shill

https://handmade.network/

digitaltrees 1142 days ago [-]
For me it was Safari. A real web browser. That was the game changer.
m463 1143 days ago [-]
Less than 0.1 second response and you are interactive, which is a big deal.
RocketSyntax 1143 days ago [-]
Speed is a cop out. Engineers love to focus on performance.
golemiprague 1142 days ago [-]
Tell it to Tom Brady... Speed is one component of the package but not everything; there are other factors.
fireeyed 1143 days ago [-]
The front-end JavaScript framework scourge introduced a lot of this.
iaml 1143 days ago [-]
Netflix is literally using React and is praised in this thread for their UI performance. JS is not the problem; what you do with it is.