The nature of testing may have changed.
For one, the approach to testing is different, with smaller, more frequent iterations and more powerful automation.
For another, better automation has split testing into two parts: an automated-test-developer/pipeline-plumber part, and an exploratory part where humans manually test the juicy stuff while automation handles the boring routine.
Like every other job, you have to keep yourself up to date with the latest developments.
Also, when it comes to things like self-driving, the line between QA and dev is really not there. Both of them are just writing software, since testing such a system can't be a manual process.
In general, at a AAA studio, when I think of "QA" I think of so much more than just game testing for end users.
We have things like:
* Daily smokes for content creators. Embedded QA does manual (and sometimes partially automated) smokes of core game features, as well as of our proprietary editor and other tools, multiple times a day before new binaries get rolled out to content creators, with the sole purpose of making sure that artists are actually able to work with the build(s). (A rough sketch of what an automated smoke check might look like follows this list.)
* _A lot_ of automated runtime testing and automated unit tests (some teams are better with this than others), which QA helps monitor. This makes it possible to test many combinations of weapons etc. very quickly: tests for everything from locomotion to weapon switching to damage-output validation (see the second sketch after this list).
* Cert testing - before we send a build to MS or Sony, our own QA has to go through the various certification requirements and approve it. This includes testing not just game features but loading times, build metadata, various security measures, etc.
* Engine integration testing - when game teams integrate new engine drops or features (or when features are integrated between engine streams and the main release line), things like the editor, the environment framework, and other relevant tools and features are tested by QA against an example data set.
* Full playthrough testing. I don't know for sure which QA department/team handles full playthrough testing and its various iterations, but I suspect it's our dedicated remote QA studio rather than in-house QA (although I'm sure they do some of that too).
* And more that I'm not thinking of off the top of my head right now.
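To make the daily-smoke idea concrete, here's a minimal sketch of what the automated half of such a smoke check might look like. Everything in it is hypothetical: the editor binary path, the `--headless` and `--load` flags, and the sample level are stand-ins for whatever a real studio pipeline actually uses.

```python
# Hypothetical smoke check: launch the new editor build headless,
# load a known-good sample level, and fail fast if anything breaks
# before the binaries get rolled out to content creators.
import subprocess
import sys

EDITOR_BIN = "/builds/latest/editor"     # hypothetical path
SAMPLE_LEVEL = "smoke/sample_level.map"  # hypothetical asset
TIMEOUT_SECONDS = 300

def smoke_check() -> bool:
    try:
        result = subprocess.run(
            [EDITOR_BIN, "--headless", "--load", SAMPLE_LEVEL],
            capture_output=True,
            timeout=TIMEOUT_SECONDS,
        )
    except subprocess.TimeoutExpired:
        print("FAIL: editor hung while loading the sample level")
        return False
    if result.returncode != 0:
        print(f"FAIL: editor exited with code {result.returncode}")
        print(result.stderr.decode(errors="replace"))
        return False
    print("PASS: editor booted and loaded the sample level")
    return True

if __name__ == "__main__":
    sys.exit(0 if smoke_check() else 1)
```

A human still eyeballs the build afterwards; a script like this only guards against "artists can't even open it" breakage.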
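And for the automated runtime/unit testing bullet, a sketch of damage-output validation. The stats tables, the `compute_damage` function, and the expected values are all invented for illustration; the point is just that a parametrized test can sweep weapon/armor combinations far faster than a manual pass.

```python
# Hypothetical damage-output validation: sweep weapon/armor
# combinations and check the computed damage against expectations.
import pytest

# Invented stats: weapon -> base damage.
WEAPON_DAMAGE = {"pistol": 25, "rifle": 40, "shotgun": 80}
# Invented stats: armor -> flat damage reduction.
ARMOR_REDUCTION = {"none": 0, "light": 10, "heavy": 30}

def compute_damage(weapon: str, armor: str) -> int:
    """Toy damage model: base damage minus armor, floored at 1."""
    return max(1, WEAPON_DAMAGE[weapon] - ARMOR_REDUCTION[armor])

@pytest.mark.parametrize("weapon", WEAPON_DAMAGE)
@pytest.mark.parametrize("armor", ARMOR_REDUCTION)
def test_damage_is_positive_and_bounded(weapon, armor):
    # Every combination: damage should never be zero or negative,
    # and armor should never *increase* damage past the base value.
    dmg = compute_damage(weapon, armor)
    assert 1 <= dmg <= WEAPON_DAMAGE[weapon]

def test_known_combination():
    # Spot-check one expected value (invented numbers).
    assert compute_damage("rifle", "light") == 30
```

The two stacked `parametrize` decorators give the full cross product of weapons and armors, which is exactly the kind of combinatorial sweep that's tedious to do by hand.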
And then of course we have the regular team playtests, where the entire team plays certain maps in development and files bugs, provides feedback, and so on. Having a few dozen to a few hundred devs actually playing the thing helps catch issues with different scenario/equipment combinations, map problems, etc.
To be fair, we’re not really in a tech hot zone so that might be why we struggle.
The same goes for DuckDuckGo.
We don't have QA, because customer service, marketing, and engineering all do QA, correcting each other's mistakes through discourse.
Most of the design is created by UX/UI experts, and then the frontend guys follow it exactly.
Why waste additional money on QA?