Here are my recipes: https://github.com/pauldotknopf/darch-recipes
You can use Ubuntu, Arch, Debian, or Void Linux.
My entire operating system is stored on DockerHub. I run "update-machine" from my terminal, grab a snack, come back, reboot, and my machine is updated.
My entire OS has a tmpfs overlay, meaning I can wreak havoc, and a simple reboot will wipe everything clean. I use "hooks" to mount disks at certain spots (/home, /var/lib/docker, etc.) in the initramfs, before chroot.
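For anyone curious what that looks like mechanically, here's a conceptual sketch in plain overlayfs terms (not Darch's actual hook syntax; paths are illustrative):

    # conceptual only -- not Darch's real layout
    mount -t tmpfs tmpfs /mnt/upper          # all writes land in RAM
    mkdir -p /mnt/upper/rw /mnt/upper/work
    mount -t overlay overlay \
      -o lowerdir=/mnt/image,upperdir=/mnt/upper/rw,workdir=/mnt/upper/work \
      /mnt/newroot
    # a reboot drops the tmpfs, and with it every write since boot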
Also, you can try it out really quickly in a VM with a pre-built image: https://pknopf.com/post/2018-11-09-give-ubuntu-darch-a-quick...
I run the same exact image, bit for bit, natively on the 3 machines I have (laptop, home, and work). Together with Darch and my dotfiles, my environment is consistent wherever I go.
Using container permissions also gives you much of the permission structure you'd be looking for in your OS, much like a mobile device. I'm honestly surprised someone hasn't put in the development effort to create a truly modern distro like Arch in containers for desktop and mobile. I think Purism is working with wlroots for Wayland. I'm looking forward to trying this with SwayWM if I can find the time, money, and partners to help me with it.
Also, check out Simula for some AR/VR concepts:
That's what Fedora Silverblue is all about: https://silverblue.fedoraproject.org/
I'm not sure what will happen after IBM's acquisition of Red Hat, but as far as I remember the last announcement was that Silverblue will get the best bits of CoreOS (in turn acquired by Red Hat) and Atomic Workstation.
Take a look at NixOS. I'm not sure about containers, but it has `nixos-rebuild build-vm`, which builds a VM disk image with your current system configuration.
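For reference, the flow is roughly this (the run script's name depends on your hostname):

    # build a QEMU VM from your current /etc/nixos/configuration.nix
    nixos-rebuild build-vm
    # launch it; the script is named run-<hostname>-vm
    ./result/bin/run-*-vm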
Great minds think alike ;-)
Thank you, I'll be checking that out with interest.
I do something similar but with a VM instead of a Docker container, and it works well. The one thing I like about using a VM is that it runs a full Ubuntu with an init system, so it's easier to run daemons like Samba. My VM shares its home directory so I can mount it on the host and share files across the barrier (I've always found VM "shared folder" implementations flaky). Files are less of an issue with Docker, but you might want to run other daemons within the context of your dev environment. The Docker "one executable per container" idiom really falls apart here and requires you to hack around it.
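For the home-directory share, something like sshfs does the trick (hostname, user, and paths here are placeholders), sidestepping the hypervisor's shared-folder machinery entirely:

    # mount the VM's home on the host over ssh
    sshfs dev@vm.local:/home/dev ~/vm-home
    # unmount when done
    fusermount -u ~/vm-home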
To keep things fresh, I just have a couple of bash scripts that I run (one for some system packages, one for dev tools, one for dotfiles) on a new Ubuntu VM every few months. I'm sure it would be trivial to automate this with Vagrant to streamline it even further, but it's been good enough for me.
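The scripts are nothing fancy; roughly this shape (the package list and dotfiles URL are placeholders, not the real scripts):

    #!/usr/bin/env bash
    # system-packages.sh -- illustrative shape only
    set -euo pipefail
    sudo apt-get update
    sudo apt-get install -y git tmux build-essential
    # the dotfiles script then just clones and runs an installer
    git clone https://github.com/you/dotfiles ~/dotfiles
    ~/dotfiles/install.sh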
You can get that functionality without full VM overhead by using:
Yes, the idea of long-running daemons on the workstation seems at odds with Docker - something to get my head round, I guess.
As for more grammar ... aspell should be the next addition to the Dockerfile.
You could get that with LXC/LXD containers.
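e.g., with LXD it's something like:

    # spin up a disposable container and get a shell in it
    lxc launch ubuntu:18.04 devbox
    lxc exec devbox -- bash
    # throw it away afterwards
    lxc delete --force devbox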
The talk and slides at the end of that post are worth the time if this sort of thing is interesting to you.
It uses OSTree to manage state, and only allows mutability inside the home directory and /var.
Overall I was very impressed. Another year or two of rounding out the typical use cases and it will make a fine immutable workstation.
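Day-to-day management goes through rpm-ostree; the loop is roughly:

    rpm-ostree upgrade          # stage a new base image
    rpm-ostree install htop     # layer an extra package on the base
    systemctl reboot            # changes take effect on the next boot
    rpm-ostree rollback         # boot back into the previous deployment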
It uses btrfs instead of OSTree. It only allows writes inside /home, /var, /tmp, /etc, ... and everything else is updated transactionally.
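Under the hood the btrfs approach amounts to something like this (hand-waved sketch; the real tooling wraps it):

    # snapshot the current root, update the snapshot, then boot into it;
    # the running root is never touched, so a bad update is just discarded
    btrfs subvolume snapshot / /.snapshots/update
    # ...package updates run inside /.snapshots/update...
    btrfs subvolume set-default /.snapshots/update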
Configuration of my system is mostly easy when setting up (dnf install xyz, plus 2 config file tweaks in /etc). It's the config of everything in /home that's complex (GNOME settings, my emacs, bash and git config, etc.).
It's a trade-off. I like it because I want a system that always works and updates silently, and I don't make heavy customizations. It's probably not for you.
This is precisely why I built Darch (https://godarch.com). I wanted an immutable OS, but also to take full advantage of my hardware, completely natively. Your images show up in GRUB and you can boot right into them.
Oooh, IceWM - a nice reminder :-)
It installs a MATE or Lubuntu desktop inside an LXC container, allows access via X2Go, and pre-downloads Chrome Remote Desktop, which can be configured in less than a minute (run Chrome, log in, open Remote Desktop, enable connections).
It was spawned out of my virtual builders (https://github.com/kstenerud/virtual-builders) project, to let me get my Ubuntu development environment installed, configured, and running - even from a fresh install - on any LXD-capable machine, in short order.
Even if my dev box dies completely, I can be back up and running on another machine or hard drive within an hour. I can set up as many of these desktops as my machine has CPU and RAM.
I wonder if there's any plan to implement the APIs that would allow the Docker server to run in WSL, thus obviating the need for a VM? Edit: there's no evidence of any motion, but here's a uservoice for namespaces, cgroups, etc. support in WSL: https://wpdev.uservoice.com/forums/266908-command-prompt-con...
Edit2: Actually it looks like docker support is being worked on: https://github.com/Microsoft/WSL/issues/2291#issuecomment-43...
I've never seen a good explanation of why hypervisors can't co-exist on Windows. I'm sure there's a technical reason but if anyone has any articles that explain this I'm very interested!
Unless you're fine with 32-bit. In that case you can use any system you want at the same time (also on Windows).
You wrote "There is a developer who (I think) works for Docker..."
I believe that you are referring to Jessie Frazelle.
It just came about out of annoyance with yet again trying to rebuild my personal workstation to even a barebones level.
It's definitely in the "if you aren't embarrassed, you launched too late" category.
NixOS doesn't sandbox apps by default (obviously, the user could run all their apps in containers/VMs/etc., but the same is possible on other distros).
I think Qubes or something like it will be the right way to go for safety in "the future" - but I would like a really simple way to define my qubes upfront - really, really simple.
Maybe they have it - I haven't looked deeply enough.
Nix 2.0 had a bug which caused excessive memory use, but it's been fixed in 2.1: https://github.com/NixOS/nix/commit/2825e05d21ecabc8b8524836... https://github.com/NixOS/nix/commit/48662d151bdf4a38670897be...
I found it useful to separate OS configuration out from installing non-OS (third-party) software... it's basically a cue/cost that makes me prefer OS-level installations over "manual" installs.
...lots of hardcoded and ugly stuff in there, but I like the idea of it a lot.
My experience has been mixed, but good overall:
1) docker mounts the local host's ~/.ssh info, and mounts a "~/host" dir and a "~/Git" dir (promoting some of the host FS to top-level directories)... this is primarily around me coding / working with Git, so it's the right choice for me. It's nice that the auth story mostly properly follows me around as I move from computer to computer (primary auth is on the local host; this image expects my local auth to be properly set up). There's a sketch of the invocation after this list.
2) docker FS isolation is interesting... you can "apt install $foo" in different windows/instances, and system-level changes are effectively isolated and disposable. It allows near-instantaneous installs of certain packages (e.g. "apt install ffmpeg"), and the command will be gone after that particular session (unless I decide it's something I use often enough to be added to the docker recipe). Contrast this with the cruft on a 5-year-old home linux box that has thousands of packages installed from running random tutorials off the net.
3) startup time is quick and roughly equivalent to booting an old x86 PC. No matter how much I screw up the operating system inside the container, it's just a quick "reboot" to get back to the last known-good state.
4) VNC/X11 is "meh" and more of a parlor trick. However, there are interesting use cases for making an "appliance docker image"... Firefox works OK, and is maybe a good idea for carrying around a "paranoid browser", but it definitely feels a bit uncomfortable.
5) I know the non-OS install stuff (e.g. heroku, rustc, etc.) can likely be done in the initial Dockerfile step; doing it there forces me to make sure the system stays in a consistent state. When something isn't pulled directly from Debian, it likely needs a more compelling story to make sure the software stays installable and up to date (e.g. vim plugins, etc.). I guess it feels a bit like a local version of homebrew recipes?
6) As a hack, I cobbled together a "build/bake" concept which locks down a particular set of the non-OS stuff as well. When "baked", the image doesn't really auto-update (it's tough to automate version control against certain random sources/installs), but I have some scripts that try to keep the base OS up to date, encouraging you to go through the "build/bake" cycle as time goes on, keeping the box "evergreen".
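For the curious, points 1 and 2 boil down to an invocation along these lines (image name and mount points are illustrative, not my exact setup):

    # disposable workstation shell; host auth and code follow you in.
    # ~/.ssh is mounted read-only so primary auth stays on the host;
    # ~/host and ~/Git promote pieces of the host FS to top-level dirs.
    docker run --rm -it \
      -v "$HOME/.ssh:/root/.ssh:ro" \
      -v "$HOME:/root/host" \
      -v "$HOME/Git:/root/Git" \
      my-workstation bash
    # anything you "apt install" inside is gone once the shell exits (--rm)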
An example victory is that, when on a Windows box, I can get "real" vim, bash, etc. and operate on Windows files from within Linux (something that WSL can't officially do).
On the Mac side, I can get "sort -R/--random-sort" when processing random data, as well as the ability to quickly pull down ffmpeg or imagemagick... again, in a "disposable" environment, without seriously jeopardizing my OSX install and building up cruft.
not sure I get 5/6 - will dive in later (just decided this project needs a roadmap)
apt-get update && apt-get upgrade
I have very high confidence that debian/apt is "immutably available and consistently managed", but many of the non-free pieces of software (corporate-source or "not yet standardized, maintained, and included in debian") can require special attention, especially regarding updates.
build-env.sh - the operating system, apt-only, "just a linux box", plus downloading and installing the heroku tooling b/c I often need it but it's not in the main debian package pool.
bake-env.sh - "geez, I hate waiting for heroku, rust, and random vim plugins to download... let me freeze this moment in time"... apt-get update still works and will keep the OS up to date, but heroku, rust, vim plugins, etc. won't "auto-install / auto-update" b/c they don't have properly managed versions or an install/update cycle like the rest of the OS.
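In docker terms it's roughly this (tag names are illustrative, and the docker-commit mechanics are a simplified sketch of the real scripts):

    # build: the apt-only base, reproducible any time from the Dockerfile
    docker build -t env:base .
    # bake: run the slow non-OS installs once, then freeze the result
    docker run --name baking env:base /root/bake-env.sh
    docker commit baking env:baked
    docker rm baking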
So you have a set of dotfiles at your URL, and if you bake-env, those are part of the docker build - and presumably the dotfiles do the rust installs etc., so your docker is ready with the rust ecosystem installed (not a rust user, so unclear) - and then you can happily ignore it till you need to upgrade some rust package?
Duuuude, I bet you're a blast at parties in your mom's basement. We should totally hang out sometime.