A Few More Reasons Rust Compiles Slowly (pingcap.com)
ajtjp 1389 days ago [-]
As a relative neophyte in Rust (I've gone through about half the chapters in the Rust Book), I recently deployed a small Rust server on DigitalOcean and was surprised by the compilation speed. The server's code was about 27 KB in size, producing a binary of about 160 KB. But it had 92 dependencies, including transitive dependencies, and took 5 minutes and 38 seconds to build, which was quite impressive relative to the size of the code, even allowing for the not-super-fast CPU.

While I was watching its output, I realized that in Rust, all the dependencies are compiled; I'm used to Java+Maven or JavaScript+NPM, where pre-built dependencies are downloaded instead, and that tends to be pretty quick (provided your network pipe is wide enough). I'd be curious to learn why Cargo re-compiles from scratch instead of offering pre-compiled binaries as well. I guess part of it is related to different target platforms, but it seems like if the top 5 platforms were targeted and had compiled artifacts available, you could reduce compile times for those platforms by a significant amount.

On the other hand, the error messages I ran into along the way were quite good at pointing me in the right direction about how to fix them, which saved more time than the extra compile time cost, relative to the Python-based alternative I had been trying to set up before that.

steveklabnik 1389 days ago [-]
It's not just target platforms, it's also build flags; you could do the default for debug and release, but set any sort of custom option and you're back to square 1.

We would also need to pay the costs of building, hosting, and distributing all of that...

There are other middle grounds too. There's certainly interest; it's just not trivial. If it were, we'd do it!

matthewaveryusa 1389 days ago [-]
Just out of curiosity, how does Rust prevent the Gentoo issue of: you can compile with custom flags, but most projects have only been tested with specific flags, and if you change them they will break the build, so really, no custom flags? It always starts off with custom flags and ends up ossifying to the default-only flags.
ohazi 1389 days ago [-]
Most of the flags tell the Rust compiler what strategies to use when producing machine code. Actual Rust code rarely looks at these flags, so correct interpretation of the program generally isn't affected by them.

Contrast this with C and C++, where it's common to #ifdef in an entirely different program depending on the flags.

It's certainly possible to do something like this in Rust, but in practice it's rare.
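
For a concrete picture, here's a minimal sketch of what that (rare) Rust equivalent looks like; the function and values are made up for illustration:

  // cfg attributes select code at compile time, much like #ifdef,
  // but most crates confine this to small, well-tested corners.
  #[cfg(target_os = "linux")]
  fn platform_name() -> &'static str { "linux" }

  #[cfg(not(target_os = "linux"))]
  fn platform_name() -> &'static str { "not linux" }

  fn main() {
      println!("running on {}", platform_name());
  }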

Matthias247 1389 days ago [-]
Based on my experience with feature flags on some projects that made extensive use of them, Rust is definitely also vulnerable to this issue. In those projects, not all configurations and feature combinations had been tested, and some specific configurations caused compilation to break.

Good CI configuration can definitely help to catch these issues, but it's obviously an extra effort for projects to set up. The alternative could be to simply avoid feature flags and always compile everything, which depending on the project might or might not be the better option.

kibwen 1389 days ago [-]
I'm guessing that when the parent says "custom options" they're referring to libraries providing their own customization options via "crate features" (https://doc.rust-lang.org/cargo/reference/features.html). Not only do libraries have an incentive to test that their own feature flags work, but these feature flags also avoid combinatorial explosion (and are easier to test) because feature flags must be additive due to how Cargo is designed: every feature flag must function the same even in the presence of any combination of the rest (flags must not be exclusive with each other).
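
To make "additive" concrete, a minimal sketch (the crate and its `pretty` feature are hypothetical, declared under `[features]` in Cargo.toml):

  pub struct Report {
      pub title: String,
  }

  impl Report {
      pub fn summary(&self) -> String {
          format!("report: {}", self.title)
      }

      // Enabling `pretty` only adds this method; it never changes
      // `summary`, so any combination of features still compiles.
      #[cfg(feature = "pretty")]
      pub fn pretty(&self) -> String {
          format!("=== {} ===", self.title.to_uppercase())
      }
  }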

On the other hand, I'm guessing that by "custom flags" you're referring to compiler-level flags that influence codegen, which rustc doesn't have that many of, and most of the ones that rustc does have are for controlling things that people might reasonably expect to be nontrivial work to change in the first place (e.g. linker/symbol options, cross-compilation/platform options, LLVM/gritty optimization/instrumentation options). Of the rustc codegen options that ordinary users might want to play around with, I see only two: one for turning off arithmetic overflow checks (which makes all integers act like their overflowing equivalents from the stdlib), and one for determining the general strategy of what happens when a panic occurs. The latter is unlikely to cause problems because the default behavior is a superset of the configurable behavior, and for the former any silent problems in misconfigured users would manifest as panics for everyone else, so there's a good chance the problem would be fixed upstream regardless.
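
As a sketch of what that overflow-checks flag changes (assuming the usual defaults: checks on in debug builds, off in release):

  fn main() {
      let x: u8 = 255;
      // With overflow checks on, `x + 1` panics at runtime;
      // with them off, it silently wraps around to 0.
      // let y = x + 1;

      // The explicit stdlib equivalent behaves the same either way:
      let z = x.wrapping_add(1);
      println!("{}", z); // prints 0
  }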

steveklabnik 1388 days ago [-]
I meant basically all the stuff the other three comments said :)
pjmlp 1389 days ago [-]
Have you watched the Swift-related announcements at WWDC? Package manager support for binary dependencies is coming with Big Sur.
steveklabnik 1389 days ago [-]
Not yet.

The most valuable company in the world has significantly more resources than we do.

jka 1389 days ago [-]
With hope for the presence of positive-minded folks with contribution time to spare: any suggestions of areas that could use help towards binary builds?
steveklabnik 1389 days ago [-]
I would reach out to the Cargo team: https://www.rust-lang.org/governance/teams/dev-tools
littlestymaar 1389 days ago [-]
Apple controlling the hardware makes it way easier, though. At home, I compile Rust with CPU-specific features on about as many different pieces of hardware as Apple has in their whole ecosystem!
masklinn 1389 days ago [-]
> Apple controlling the hardware makes it way easier though.

I'm not sure that's really relevant.

Rather:

* Apple has resources, lots

* Apple wants to promote the use of Swift, and making it easier and more convenient is a good way to do that; binary dependencies reduce the complexity of the build process because you don't need to build dependencies

* Apple has a vibrant ecosystem of small closed-source shops; binary-only distribution is useful for those, as well as for Apple themselves

* Promoting binary and eventually dynamically linked dependencies might mean the ability to dedup' on-system dependencies

O_H_E 1389 days ago [-]
I believe what they meant was that Apple needs to support fewer hardware variations. Compilers frequently have a host of flags that depend on CPU features, which can potentially make the code behave slightly differently. If Cargo were to implement this, it would need to compile each package many more times (with different configs) than Apple needs to for Swift.

And yes, of course it is a given that Apple's resources are vast, to say the least.

masklinn 1389 days ago [-]
> I believe what they meant was that Apple needs to support less hardware variations.

I understand that very well. My point is that they wouldn't be doing it if they didn't want to, and if they really want to (for the reasons I outlined), they have the resources to make it happen essentially regardless of the constraints or hardware breadth.

pjmlp 1389 days ago [-]
Apple also took a multi-year effort of designing a stable ABI for Swift as means to enable exactly this scenario.
lilyball 1389 days ago [-]
The stable ABI was so Apple could ship Swift in the system. The fact that it enabled binary dependencies was more of a freebie.
lilyball 1389 days ago [-]
Binary dependencies are really about supporting closed-source libraries. Because they’re in binary form it means there’s no ability to customize the build process, no build flags or optimization levels or anything else.

Apple also relies heavily on dynamic linking in general, so binary dependencies are likely going to dynamically link their own dependencies, thus removing a lot of the variability that would otherwise require recompilation.

pjmlp 1389 days ago [-]
If Rust is to be taken seriously as a systems programming language, it needs to cater to a use case that is very important for a large part of the C, C++ and Ada population.

Apple is doing this, because Swift is their next-gen systems programming language.

Swift supports static linking just fine as well.

Really, it is all a matter of which demographics Rust wants to be present in.

And with Rust now being adopted by Microsoft and Google, I just see this need only increasing.

harikb 1389 days ago [-]
While I agree with your statement, the lack of binary linking has been a blessing in Go and Rust. The inability to give a binary "SDK" forces many companies to provide source (and in many cases open-source their library). I would find it very irritating if I can't navigate into the library source at least during debugging.
pjmlp 1389 days ago [-]
Go tooling supports binary dependencies, where the only source provided by the packages is the documentation for go doc.

It just looks like everything is source code when not taking the effort to read through all dependencies.

It doesn't force companies at all; only those that are comfortable shipping source libraries end up adopting such languages.

I used to work for a company that shipped encrypted Tcl source code and provided the necessary interpreter hooks to access the code in its encrypted form.

Nullabillity 1388 days ago [-]
> It doesn't force companies at all, only those that are comfortable shipping source libraries end up adopting such languages.

Yes, and the others are left behind and don't get to participate in (and infect) the ecosystem. Sounds like a win to me.

> I used to work for a company that shipped encrypted Tcl source code and provided the necessary interpreter hooks to access the code in its encrypted form.

Surely... if the interpreter can decrypt the code then so can the user? Minification and obfuscation are of course still possible, but the whole encryption thing seems pointless.

pjmlp 1388 days ago [-]
Tools like IDA are always a possibility, but the large majority of customers aren't willing to go that far.

"Infect" the eco-system? That is not the way make business.

masklinn 1389 days ago [-]
> I'd be curious to learn why Cargo re-compiles from scratch instead of offering pre-compiled binaries as well. I guess part of it is related to different target platforms

There’s that but there’s also issues like ABI stability (and the lack thereof), compilation flags, hosting (infrastructure and its cost), distribution mechanisms, …

There's an issue on cargo dating back to 2015 (#1139), but there's a lot of effort needed to think about the problem, then actually solve it.

pjmlp 1389 days ago [-]
That is the approach taken by most compiled languages, especially in commercial environments.

I think it is only a matter of time until cargo gets its own "Maven".

Incidentally, one of the Swift announcements at WWDC was support for binary packages in the Swift Package Manager.

Elinvynia 1389 days ago [-]
With Rust you get safety at compile time while avoiding memory safety issues at runtime. The compile times are objectively longer compared to most other languages. But whether this trade-off is worth it for you depends a lot on your company's CI pipeline and the skill of the developers.
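
To illustrate what that compile-time safety means in practice, a minimal sketch (my own example, not from the article; the commented-out line is the part the compiler rejects):

  fn main() {
      let mut v = vec![1, 2, 3];
      let first = &v[0];
      // v.push(4); // rejected at compile time: cannot borrow `v` as
      //            // mutable while `first` still borrows it immutably
      //            // (reference invalidation caught before runtime)
      println!("first = {}", first);
      v.push(4); // fine once the immutable borrow has ended
  }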
verdagon 1389 days ago [-]
I'm not sure safety has anything to do with long compile times; can you enlighten me?
efnx 1389 days ago [-]
There's a lot happening at compile time that less safe languages don't do: type checking and type erasure, etc.
int_19h 1389 days ago [-]
As the article explains, all of this is a relatively minor contribution to compile times.
pjmlp 1389 days ago [-]
Ada/SPARK does it.
verdagon 1389 days ago [-]
IIRC, languages like Java and Go compile quite fast, and they are even safer than Rust...
Gibbon1 1389 days ago [-]
I feel like the long compilation times are going to be a show stopper for smaller teams/companies where you can't afford the loss in productivity.
efnx 1389 days ago [-]
Anecdotally, I can tell you it does not amount to a loss in productivity. Cargo caches built deps, so the cost is really only paid once in a while, and the advantages of Rust's type system are a boon to productivity.
lmm 1389 days ago [-]
If productivity is important, you really can't afford memory corruption bugs. Those can easily take weeks to hunt down.
pdimitar 1388 days ago [-]
It can be slightly annoying -- compared to f.ex. OCaml with its lightning-fast compiler -- but in practice incremental recompilation is much faster than a compilation from scratch so it rarely irks me that badly.
Cyph0n 1389 days ago [-]
Just to add to other comments: Cargo feature flags are another reason why you need to build your dependencies.

Of course, if Cargo supported pre-compiled dependencies, I am guessing that it would be smart enough to only recompile the dependencies that are using non-default features.

Matthias247 1389 days ago [-]
In addition to feature flags, the fact that a lot of Rust code is generic, and would only be compiled into binary form in the final application, would likely limit how much use precompiled artifacts could see.
alkonaut 1389 days ago [-]
> it had 92 dependencies, including transitive dependencies, and took 5 minutes and 38 seconds to build.

What is it that triggers such a "full" build though? Obviously in some CI scenarios you might start from scratch, but in an edit/compile cycle, you'd never be hitting those 5 minutes, correct?

Measter 1388 days ago [-]
Typically in an edit/compile cycle you'd only compile your dependencies once for each build type (check, debug, release). Unless you change a dependency's feature flag, the compiler will just re-use what's already been compiled.

If you clean your build folder, things will need to be rebuilt from scratch. Likewise if you change your compiler version.

alkonaut 1388 days ago [-]
It seems a CI system should be able to work the same way and cache the dependencies just like a local build.

If it does, then the long compile times are almost never encountered by either developers or CI. So are they really problematic?

sk0g 1389 days ago [-]
Go would have compiled an order of magnitude faster still. The compiler is less safe and thorough than Rust's, but still.
efnx 1389 days ago [-]
You’re right, though after an initial compilation cargo does a good job caching and subsequent builds will often beat Go’s compile times.
sk0g 1389 days ago [-]
The backend service I'm using doesn't have cached CI yet, so the entire build takes ~20 seconds. The CircleCI free tier is good enough for now, and more time is spent pulling dependencies than building, anyway. The build pulls in heaps of dependencies, and the main codebase itself is around 20kloc. Subsequent local builds take around 0.2 seconds, so I'm very happy with it. Even working on a tiny Java/Kotlin codebase recently made me miss the good compile times!

How hard is the caching to set up, especially in a CI setting?

efnx 1388 days ago [-]
It depends. At my work we use Buildkite, which is a “bring your own runners” build service, so caching is available by default, as long as the cache is outside of the project directory. For personal stuff I use GitHub and GitLab; they both offer caching. GitHub's offering is easier, but GitLab's is just as effective.

Cargo caches by default, and you can specify the path where it stores artifacts. Most of the cache story is about what your chosen provider uses to specify build steps, etc.

rmdashrfstar 1388 days ago [-]
Use sccache and an S3 bucket
ChrisSD 1389 days ago [-]
Rust supporter: Build times are comparable to C++.

Rust critic: Build times are comparable to C++!

lenkite 1389 days ago [-]
Not sure if this is really true. Our C++ project builds are quite fast. I personally find Rust's cold compile times far longer than C++'s.
uluyol 1389 days ago [-]
I wonder whether C++ modules will give C++20 the edge here.
nindalf 1389 days ago [-]
I asked someone who worked on the C++ modules proposal, and they said it might both help and hinder compile times.

On one hand, compile times might get worse because compilation units are larger and less can be done in parallel. But in the future, incremental builds will probably become quicker, because it's easier to tell what's actually changed and what that affects.

qppo 1389 days ago [-]
I have a rather large side project in Rust using cargo workspaces and compilation times do not bother me in the slightest, because I'm running incremental builds for everything.

On the other hand I tried introducing Rust for a small part of a larger ecosystem and the cold compile times were so bad we rewrote the functionality in C. It shaved minutes off our CI build times, which costs actual money.

asdkhadsj 1389 days ago [-]
> On the other hand I tried introducing Rust for a small part of a larger ecosystem and the cold compile times were so bad we rewrote the functionality in C. It shaved minutes off our CI build times, which costs actual money.

Yea, we have this issue (as a shop now using Rust for all of our backend).

I have a couple of hacks in place to cache the majority of the build, thankfully, only needing to compile our own source code unless something changes. When our build cache works, our builds take ~60s. When it doesn't, ~15m.

qppo 1389 days ago [-]
To be fair to Rust, this is certainly true of other languages. C++ builds can also be compile time hogs.

What did you wind up doing to cache your builds? I've tried a few different hacks but none have stuck.

asdkhadsj 1389 days ago [-]
Oh yea, I wasn't picking on Rust. If anything I tend to defend Rust haha.

As far as what we did to cache, nothing fancy - using Docker build layers. I add my Cargo files (lock/toml), include a stub source lib.rs or main.rs to make it build with a fake source, and then build the project.

This builds all the dependencies. It also builds a fake binary/lib for your project, so you need to strip that from the target directory. Something like `rm -rf target/your?project?name*` (I use ? to wildcard _ and -)

If you do that in one layer, your dependencies will be cached with that docker image. In the next layer you can add your source like normal, compile it, and you'll be set.

We lose our cache frequently though because we're not taking special care to centralize or persist the layer cache. We should, for sanity.

giovannibonetti 1389 days ago [-]
> We lose our cache frequently though because we're not taking special care to centralize or persist the layer cache. We should, for sanity.

Do you use digests (@sha256:...) in the Dockerfile source image (FROM ...) to ensure you are always using the same layer?

If not, that is probably the reason why your cache is failing so often.

asdkhadsj 1388 days ago [-]
Nope, but that shouldn't be required, I'd think? It's not locally, for example: as long as each layer's resulting hash has not changed, the cache is reused. Likewise on CI, if I repeatedly build, the caches never miss; it works great. It's more a problem of, sometimes after a day or a week, the layers seem to be gone. But never in between your pushes to a PR, for example; those always seem to stay.

I think what's happening is that our garbage collection of old Docker layers is being too aggressive. But because it works so well between commits, pushes to CI, etc., I don't worry about it. The majority of the time I want a cache to work, it works. So it's been a low-priority thing for me to fix haha.

edit: Oh, and I forgot, we may have CI jobs running on different machines. Which of course would also miss caches, since we're not persisting the layers on our registry. I'm not positive on this one though, since like I said it never seems to fail between commit pushes _(say to a PR during review, dev, etc)_. /shrug

pjmlp 1389 days ago [-]
Indeed, but unless you are doing some crazy meta-programming, you can keep reusing the same binary libraries, as inside the same company most projects tend to have similar build flags anyway.

Alternatively it is also quite common to use dynamic libraries, or stuff like COM, XPC.

jfkebwjsbx 1389 days ago [-]
C++ build times get high only if you start messing with the type system and ask for it to be recompiled every single time again and again.

Rust is slower overall, which is why people tend to complain. And if you start messing around like in C++, then you get even crazier times.

But in neither case is it a dealbreaker compared to other languages. Go proponents claim compilation speed is everything, which is suspicious. I do not need to run my code immediately if I am programming in C++ or Rust. And if I am really doing something that requires interactivity, I should be doing it another way, not recompiling every time...

qppo 1389 days ago [-]
I think it's fairly naive to say build times get long "only if you mess with the type system", whatever that means. It's pretty easy to tank your compile/edit/debug cycle just by adding or removing things in a header file. Maybe modules will improve things.

I've worked in C++ code bases with just a few 100k loc where one starts architecting the software to avoid long compile times. Think about how insane that is, you choose to write and structure code differently as punishment for the sin of writing new code. Not to improve the software performance or add new features.

The worst example of this is the pimpl pattern. You make the explicit choice to trade off compile times to hide everything behind a pointer dereference that is almost guaranteed to be opaque to the compiler, even after LTO, so the only "inlining" you may see is from the branch predictor on a user's machine. That's bonkers!

jfkebwjsbx 1389 days ago [-]
Of course compile times increase with code size, that is not what people are talking about when they say "X language compiles slowly". They are talking about time per code size.

Messing with the type system is using it for things you really should not in any reasonable project. For instance, some of the Boost libs with their overgeneralizations that 99% of users do not need.

ByteJockey 1389 days ago [-]
>"only if you mess with the type system" whatever that means

I think they're talking about the STL.

eloff 1389 days ago [-]
Boost.
Insanity 1389 days ago [-]
Having a fast feedback loop helps with staying in the flow. Compile times need to be short for this.
rmdashrfstar 1388 days ago [-]
That’s why you write unit and integration tests :) That’s how you get a fast feedback loop, instead of looking for it in compilation
jfkebwjsbx 1389 days ago [-]
What is "staying in the flow"?

Recompiling fast helps the most when learning to program, but not for actual applications with some complexity.

Many applications do not even have meaningful output by just running it, for instance they may take a long time to compute something meaningful.

pdimitar 1388 days ago [-]
> What is "staying in the flow"?

Run the code (with some debug logging statements, very likely), find a small mistake, make a few characters worth of correction, press your IDE's keyboard shortcut for recompilation and re-running.

If the last step is slow you can lose momentum and motivation to iterate quickly on the problem at hand.

Sure it doesn't apply to a lot of projects, that's true. But it's not charitable to claim it only applies when learning.

gameswithgo 1389 days ago [-]
It depends on the problem domain. When working on, say, a desktop app with graphical UI features, it can be very useful to be able to change/experiment quickly.

With an n-tier web application you won't often be able to do that anyway.

pjmlp 1389 days ago [-]
And this is why in its current state you wouldn't be seeing something like RustUI or Rust Playgrounds.
MaxBarraclough 1389 days ago [-]
How do the Mozilla folks cope with these build-time issues?
KwanEsq 1389 days ago [-]
1) Front-end engineers don't compile any Rust or C++ etc. code; instead they use artifact builds, which download pre-built binary artifacts from Mozilla's continuous integration builds, and their local builds only "compile" the HTML/JS/CSS code.

2) They use sccache locally for caching binary build artifacts.

3) In-office, they use stuff like distributed compiles or a beefy compile machine on the network, rather than necessarily building on the machine they are using.

zelly 1389 days ago [-]
because, it's *checks notes*

  % curl -Lo mozilla.zip https://hg.mozilla.org/mozilla-central/archive/tip.zip
  % unzip mozilla.zip && cd mozilla-central-*
  % find . -type f \( -iname '*.cpp' -o -iname '*.cc' -o -iname '*.cxx' -o -iname '*.c' -o -iname '*.h' -o -iname '*.hpp' -o -iname '*.hxx' -o -iname '*.hh' \) | wc -l
  32347
  % find . -type f -name '*.rs' | wc -l
  7669
  % find . -type f -iname '*.js' | wc -l
  70101
...still mostly C and C++ and more JavaScript than Rust.
orf 1389 days ago [-]
Does that include any of the dependencies? [1]

1. https://hg.mozilla.org/mozilla-central/file/tip/Cargo.toml

gpm 1389 days ago [-]
Cargo.lock is probably the more useful file for listing dependencies

https://hg.mozilla.org/mozilla-central/file/tip/Cargo.lock

kzrdude 1389 days ago [-]
sccache and build servers, I believe
tick_tock_tick 1389 days ago [-]
Not a ton of Rust in Firefox yet.
steveklabnik 1389 days ago [-]
2.6 million LOC according to https://4e6.github.io/firefox-lang-stats/ (this includes vendored deps, but that still gets built, of course)
antpls 1387 days ago [-]
Maybe that's a good opportunity to learn about Bazel's remote cache server (although that also costs money and time to operate)
pjmlp 1389 days ago [-]
> I imagine the prior art is extensive, but a notable innovative responsive compiler is the Roslyn .NET compiler; and the concept of responsive compilers has recently been advanced significantly with the adoption of the Language Server Protocol. Both are Microsoft projects.

Some examples of prior art in the context of strongly typed compiled languages with compilers designed for interactive development from the get-go:

Mesa/Cedar environment at Xerox PARC, and how Oberon and its descendants used to be integrated with the OS (Native Oberon and others)

Energize C++ and Visual Age for C++ version 4, although quite resource intensive for their time.

Eiffel, with its MELT VM on EiffelStudio for development, and AOT compilation via system C and C++ compilers for deployment.

C++ Builder and Delphi environments, although still batch, provide a similar workflow.

barrkel 1389 days ago [-]
Delphi, when providing support to the editor, compiles the whole file containing the cursor but completely skips all codegen and in fact all function bodies (i.e. type checking, expression analysis etc.) that don't contain the cursor. The cursor is represented as a fake token, and the parser longjmps out with context when it's discovered; it resets the lexer context if it finds the cursor in the body of a function it's trying to skip.

It doesn't need to compile dependencies because the public parts of units are a serialized symbol table with full definitions, rather than simple object files.
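
If I understand the trick correctly, it's roughly like this toy Rust sketch of the idea (not Delphi's actual implementation; an early return stands in for the longjmp):

  enum Token {
      FnStart(String), // entering a function body
      Body,            // a token inside a body: skipped, not analyzed
      Cursor,          // the fake token representing the editor cursor
      Eof,
  }

  // Skip real analysis of function bodies until the cursor is found,
  // then bail out immediately with the surrounding context.
  fn parse_for_ide(tokens: &[Token]) -> Option<String> {
      let mut current_fn = None;
      for tok in tokens {
          match tok {
              Token::FnStart(name) => current_fn = Some(name.clone()),
              Token::Body => {} // no type checking, no codegen
              Token::Cursor => return current_fn, // the "longjmp" out
              Token::Eof => break,
          }
      }
      None
  }

  fn main() {
      let toks = [
          Token::FnStart("draw".into()),
          Token::Body,
          Token::Cursor,
          Token::Eof,
      ];
      println!("cursor inside fn {:?}", parse_for_ide(&toks));
  }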

pjmlp 1389 days ago [-]
Thanks for the overview, I need to start collecting your comments. :)
gameswithgo 1389 days ago [-]
A responsive compiler is different from a fast one. Rust has a good responsive solution for IDE features now in rust-analyzer.
pjmlp 1389 days ago [-]
With the examples I gave, the application is ready to execute at any given moment; rust-analyzer is a crutch, not a solution.
omn1 1389 days ago [-]
Shameless plug from a fellow rustacean here. If anyone is looking for ways to improve compile times, I recently wrote an article with some tips: https://endler.dev/2020/rust-compile-times
rubyn00bie 1389 days ago [-]
Obviously just a band-aid for the symptom, and probably known by most, but FWIW sccache has been pretty freakin' great for me at keeping build times manageable: https://github.com/mozilla/sccache

And, LOL, just now finally read the readme; didn't even know I could archive the cache over the network.... #foreverN00b that's gonna be awesome.

epage 1389 days ago [-]
For build scripts, I've switched to avoiding build-time codegen in favor of development-time codegen: commit the results, and have CI verify that the committed results and the generator are not out of sync.

This saves me building build-script dependencies, building the build script, and running the build script. In addition, by default, these are built and run in the same mode (debug/release) as the target binary, making build scripts even slower when doing release builds.

Wrote https://github.com/crate-ci/codegenrs to help with this.
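
For anyone wanting the flavor of this, a minimal sketch (not codegenrs itself; the path and the drift test are my own invention):

  use std::fs;

  // Imagine this derives Rust code from some schema.
  fn generate() -> String {
      "pub const ANSWER: u32 = 42;\n".to_string()
  }

  // Run by a developer to (re)generate the committed file.
  fn main() {
      fs::write("src/generated.rs", generate()).unwrap();
  }

  // Run in CI: fails if the committed file drifted from the generator.
  #[cfg(test)]
  mod tests {
      #[test]
      fn generated_code_is_up_to_date() {
          let on_disk = std::fs::read_to_string("src/generated.rs").unwrap();
          assert_eq!(on_disk, super::generate());
      }
  }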

swsieber 1389 days ago [-]
I like this approach. While I don't use it for any Rust programs I'm working on, I do use it for certain Java reflection-based things.
epage 1389 days ago [-]
I'm working on trying to get my company to switch to this approach for our Python monorepo.

It frustrates me that I've never seen a good code-gen story for Python:

- Some do runtime code-gen, which is hard to learn from, debug, and verify

- Some do install-time code-gen, slowing down installs and making it hard to inspect and verify

- Some check codegen in, but without any story to ensure it does not drift from the generator

lukevp 1389 days ago [-]
What is “slow”? I mostly do JS and .NET, and I just started working through the Rust hello world. I was shocked that I had an exe in around 0.2s. I haven't worked on a large project, so maybe this doesn't scale, but of the 3 (at least for hello world, which is fairly trivial), Rust felt super fast. I saw mention of Roslyn, so perhaps the issue is more around incremental compiles for IDE feedback? I didn't see any issues with the VS Code language server (and I love the auto-format on save; I set this up with prettier and I don't want to go back to a place without auto-formatters for everything. It's great DX.)
tym0 1389 days ago [-]
Depends what you're building. I have what I would consider a small server app (less than 10 JSON HTTP endpoints with some Postgres); a dev incremental build takes 17.21s and a release build 7m 12s. A similar app in Go would probably take less than a second to build. They're obviously quite different languages, but that doesn't feel that fast.
cptskippy 1389 days ago [-]
I'm looking at the pipeline for a smallish .NET Core web app: the 6 projects each take 4-25 sec to build, and with all the scaffolding it takes just under 5 minutes start-to-finish to produce binaries.
xrisk 1389 days ago [-]
Is that because the libraries you link to are gigantic? Even so, the compiler shouldn’t be compiling those parts of the libraries that aren’t being used, right?
pjmlp 1389 days ago [-]
So far cargo doesn't do binary dependencies, so when you start a project, or do CI/CD, it compiles the whole world from scratch.
chc 1389 days ago [-]
I'm not intimately familiar with the internals of rustc, but my impression was that everything gets compiled and dead code is later eliminated. In order to avoid compiling unnecessary stuff, you generally need to use feature flags (e.g. you can say "only compile this if the serialization feature is enabled").
tym0 1389 days ago [-]
Actix, Diesel, Serde and their dependencies are definitely most of the build time.
zozbot234 1389 days ago [-]
> I just started working through the rust hello world. I was shocked that I had an exe in around 0.2s.

It's not everything that's slow. It's projects that use a lot of monomorphized generics, or compile-time macros, etc. And to be sure, this situation is improving over time: as new generics-related features are added, hopefully more of the redundant code that gets output when using these features will simply be cleaned up automatically, without adverse impact on compile times.
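
A tiny sketch of the monomorphization point (standard Rust behavior; the function is my own example):

  // Each concrete type gets its own compiled copy of `largest`,
  // so generic-heavy code multiplies the IR handed to LLVM.
  fn largest<T: PartialOrd + Copy>(items: &[T]) -> T {
      let mut max = items[0];
      for &item in &items[1..] {
          if item > max {
              max = item;
          }
      }
      max
  }

  fn main() {
      // Two separate copies are generated and optimized:
      // largest::<i32> and largest::<f64>.
      println!("{}", largest(&[1, 5, 3]));
      println!("{}", largest(&[1.0, 5.0, 3.0]));
  }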

YorickPeterse 1389 days ago [-]
To illustrate, let's take the following VM code: https://gitlab.com/inko-lang/inko/-/tree/master/vm. This VM consists of 18,262 lines of code, including VM tests.

An incremental release build where nothing has changed (I just ran `touch src/lib.rs`) takes 25 seconds. Compiling the VM from scratch including all dependencies takes 1 minute and 28 seconds.

That's not too bad, but it could (and in my opinion should) be much better. Debug builds are usually faster to compile, but tend to run too slowly to be useful for anything but debugging some VM bug.

fsociety 1389 days ago [-]
I think it's more of a question for compiling programs with LOC in the millions. The compile times for projects like LLVM, Chromium, and TensorFlow are disgusting.
dkarlovi 1389 days ago [-]
What's the rough scale of "disgusting" here?
chubot 1389 days ago [-]
One observation on LLVM

Building LLVM from source, I was told in another forum, took 26 minutes across 16 cores on a top-end machine (i.e. several hours on one core). I guess I don't need to build LLVM(?)

https://old.reddit.com/r/ProgrammingLanguages/comments/hhxis...

oldmanhorton 1389 days ago [-]
Chromium will easily take 5 or 6 hours for a clean build on a good professional workstation (i7/i9/low end xeon, 32-64GB RAM, nvme ssd).
sgerenser 1388 days ago [-]
This guy did it in 53 minutes on a 16-core 3950x with a fast ssd: https://textslashplain.com/2020/02/02/my-new-chromium-build-...

Not fast by any means, but a big difference between 6 hours and 1.

floatboth 1389 days ago [-]
How?!

I can build LLVM, Rust and Firefox in less time on my R7 1700.

How is Chromium so bloated??

Bjartr 1389 days ago [-]
I'd guess it's because Google devs have Google infrastructure to accelerate and parallelize builds in a distributed manner. Build time ends up being a pretty low priority to improve when the primary contributors to the project don't have a problem with it.
secondcoming 1389 days ago [-]
It took my 4-core i7-4790K ~4 hours to compile TensorFlow. Part of the build process builds LLVM...
sangfroid_bio 1389 days ago [-]
Compilation speed is comparable to template-heavy C++: a large difference compared to Go for similarly sized projects.
rapsey 1389 days ago [-]
Chromium 5h+
rurban 1389 days ago [-]
Like with Scala. Over 5 minutes for normal executable sizes.
staticassertion 1389 days ago [-]
Builds for my 10kLOC project, which is a workspace. These are the commands, run from the root of the workspace. We could clean up some deps; we use quite a few crates, and we use a number of proc macros as well. In general I'd say we're fairly "worst case" for compile time.

cargo clean; cargo build: 4m20s

Commenting out two lines of code in one of the services + cargo build: 0m14s

cargo clean; cargo build --release: 11m43s

Change 2 lines + cargo build --release: 0m31s

I don't find this egregious.

ChromeOS Linux, which is Debian

Intel i7, 4 cores, 32GB RAM

qppo 1389 days ago [-]
Pull in serde and run a non-incremental release build.
drej 1389 days ago [-]
This article made the rounds recently: speeding up Rust compilation by replacing/removing dependencies. https://blog.kodewerx.org/2020/06/the-rust-compiler-isnt-slo...
uvatbc 1389 days ago [-]
Shameless plug: We just started supporting rustc compiler caching and released it as part of our support for open source projects: https://crave.io/#opensource

We're showcasing the Libra project to start with and would be up for adding other projects that the community is interested in.

pdimitar 1388 days ago [-]
I'd love to know more. F.ex., you could download and compile the top 100 most-downloaded Rust crates, demonstrating compile times with and without Crave.
strictfp 1389 days ago [-]
Incremental development in Rust isn't particularly slow; it's compiling all the dependencies which takes the most time, and when you make changes you only recompile your local crate, which typically is quite fast.
kd5bjo 1389 days ago [-]
The static linking section here feels a bit weak to me: if a Rust compilation fails, it usually happens before linking starts. On the other hand, if your program successfully compiles, do you really care whether the link time happens at the end of compilation or on program launch?

Unless run-time linking is faster, they both result in the same delay between the end of code generation and the first execution of your code. Static linking means you pay that cost once, in the compiler, instead of on every program launch, of which there will be at least one.

chubot 1389 days ago [-]
The work is not equivalent, and that's stated in the article:

> With dynamic linking the cost of linking is deferred until runtime, and parts of the linking process can be done lazily, or not at all, as functions are actually called

Anecdotally, Google's build system Bazel makes good use of the same thing for tests. The article also mentions that issue:

> That includes every time you rebuild to run a test.

I don't remember the details since it was a long time ago, but dynamic linking is a huge win here because of the lazy linking. The first function call is slow, but the rest are fast. Loading is faster because you never call most functions.

Rust seems to have a static dependency problem similar to Google's from 10 years ago (a big pyramid shape rather than using dependency inversion).

----

edit: related post about the Chrome/Ninja build:

http://neugierig.org/software/blog/2012/07/gyp-toc.html

> Here's one cool hack. You can, by flipping a flag, instead build this tree of libraries as shared objects. The result isn't something you'd ship to users, but it speeds up linking time during development considerably as you're passing smaller subsets of the files to the linker at a time.

> This leads to a cooler optimization. When you change one source file deep within the tree, naturally you need to rebuild its object file, then the library, and then rebuild any binary that depends on the library ...

> So instead, when building a shared object we write out both the shared object and "table of contents" file (generated with readelf etc.) that lists all the functions exposed by the resulting library. Then make dependent binaries only rebuild when the table of contents changes.

int_19h 1389 days ago [-]
What I don't get is why this:

> rustc is notorious for throwing huge gobs of unoptimized LLVM IR at LLVM and expecting LLVM to optimize it all away.

is seen as a problem with Rust, rather than LLVM. Isn't the whole point of having something like LLVM to have a high-quality backend where all optimizations only have to be implemented once, and then they just light up for all the languages that target it?

johnlorentzson 1388 days ago [-]
Some things are best optimized by the language's own compiler, which knows the source semantics better than LLVM does.
crazypython 1389 days ago [-]
D skips generating a high-level intermediate representation (HIR). D has its own, mature equivalent of Cranelift, the DMD backend, which generates code quickly.

Furthermore, like Rust, D has a mature LLVM backend. When compiling D programs with DMD and LLVM at -O0, LLVM only takes slightly longer.

Caitin_Chen 1389 days ago [-]
This series has 4 episodes. Episodes 1-3:

- Episode 1: The Rust Compilation Model Calamity https://pingcap.com/blog/rust-compilation-model-calamity

- Episode 2: Generics and Compile-Time in Rust https://pingcap.com/blog/generics-and-compile-time-in-rust

- Episode 3: Rust's Huge Compilation Units https://pingcap.com/blog/rust-huge-compilation-units

zamalek 1389 days ago [-]
> rustc is notorious for throwing huge gobs of unoptimized LLVM IR at LLVM and expecting LLVM to optimize it all away.

It might be a good idea to implement a cheap/fast optimization pass tailored to rustc's assumptions, specifically for rustc's output, to clean up the mess. Keeping the rustc emit layer concise and clear should really be a goal, in my opinion. Obvious code is easier to audit.

efnx 1389 days ago [-]
But cargo caches well, and after that first compile subsequent compilations will take mere seconds.
floatboth 1389 days ago [-]
> In the same experiment I did to calculate the amount of build time spent in LLVM, Rust spent 11% of debug build time in the linker

Which linker?

ndesaulniers 1389 days ago [-]
This is a good question; LLD blows the doors off BFD and GOLD IME.
zelly 1389 days ago [-]
Bazel has rules_rust and supports distributed builds. It might be useful to switch to that instead of cargo.
maxdo 1389 days ago [-]
Is it as slow as Scala?