> I see that Jason actually made the pull request to have wireguard included in the kernel.
> Can I just once again state my love for it and hope it gets merged soon? Maybe the code isn't perfect, but I've skimmed it, and compared to the horrors that are OpenVPN and IPSec, it's a work of art.
Edit: fixed URL
I don't think Linus' comment was directed at the author, it was at the code. Whatever the reason, the code he took a dig at is messier than wireguard.
We're really at a point where people would prefer to discuss his tone rather than the content of what he says, even here.
Couldn't agree more; all of his software is really nicely designed and implemented. Moreover, I had the chance to exchange emails with him, and he's a really nice person.
Wow. (For Wireguard performance and the link speed.)
10 Gbps link or just on the local network?
Yes, TUN is not 0-copy. Is there an equivalent that's as fast as the kernel in any other major OS? (I'm asking that honestly, I have no idea.) If there is, is in-kernel networking on that OS as fast as on Linux?
The Linux network stack is what it is; there have been recent improvements (e.g. XDP, plus kernel-bypass frameworks like DPDK for those who need them), and it can't be that bad given it powers most of the Internet.
It's telling that xnu started out on Mach with a FreeBSD "personality", and grew into another shaggy monolith.
The L4 in your baseband or your enclave is not running anything resembling a general purpose operating system.
So I guess my argument is: sure, you could design and build VPNOS, the OS where VPNs are fast and never need to touch the kernel. But nobody wants that OS.
Also, QNX has never seen usage as a general purpose OS on a PC, even though I remember trying a QNX Photon Live CD years ago. I wonder what would work well, and what wouldn't. In particular, are there security issues related to the use of message passing for drivers...
It has. QNX 6.2.1 offered a full desktop environment. I ran it as my primary OS for three years (2003-2005) while working on a DARPA Grand Challenge vehicle. The vehicle itself ran QNX, and development was also on QNX. An early Firefox and Thunderbird both ran well. The Eclipse IDE ran. All the command-line GCC tools ran. It worked like a typical UNIX/Linux system, but with more consistent response. No swapping. I could run the real-time vehicle code while compiling or web browsing, and the real-time code still met its deadlines. That consistency in response time made QNX a nice desktop OS.
It disappeared on the desktop after Blackberry took it over and made QNX closed source again. (For several years, all the source was online. Then one day Blackberry took it down, with no warning.) All the open source projects then stopped supporting QNX. QNX development is now cross-compiled from Windows.
With a small microkernel with a good track record, there's no churn. There's no new kernel every week. The QNX kernel had an update once a year or so. This is a big win when it controls your nuclear reactor.
Why is not supporting hardware acceleration a feature? Or is the objection to something more specific having to do with currently-available accelerators?
> A very large majority of in-kernel crypto users (by number of call sites under a very brief survey, not by number of CPU cycles) just want to do some synchronous crypto on a buffer that is addressed by a regular pointer. Most of these users would be slowed down if they used any form of async crypto, since the CPU can complete the whole operation faster than it could plausibly initiate and complete anything asynchronous. And, right now, they suffer the full overhead of allocating a context (often with alloca!), looking up (or caching) some crypto API data structures, dispatching the operation, and cleaning up.
> So I think the right way to do it is to have directly callable functions like zinc uses and to have the fancy crypto API layer on top of them. So if you actually want async accelerated crypto with scatterlists or whatever, you can call into the fancy API, and the fancy API can dispatch to hardware or it can dispatch to the normal static API.
Once you have WireGuard working, creating tunnels is utterly trivial. For a toy implementation:
peer 0 with IPv4 address 184.108.40.206

    # ip link add dev wg0 type wireguard
    # ip link list
      [see wg0]
    # wg genkey | tee privatekey | wg pubkey > publickey
    # mkdir wg
    # mv privatekey publickey ./wg/
    # ip address add dev wg0 10.0.10.1 peer 10.0.10.2
    # wg set wg0 listen-port 51820 private-key ~/wg/privatekey
    # ip link set wg0 up
    # wg
    interface: wg0
      public key: 0GS...0U=
      private key: (hidden)
      listening port: 51820
    # wg set wg0 peer IlC...QI= allowed-ips 0.0.0.0/0 endpoint 220.127.116.11:51820

peer 1 with IPv4 address 18.104.22.168

    # ip link add dev wg0 type wireguard
    # ip link list
      [see wg0]
    # wg genkey | tee privatekey | wg pubkey > publickey
    # mkdir wg
    # mv privatekey publickey ./wg/
    # ip address add dev wg0 10.0.10.2 peer 10.0.10.1
    # wg set wg0 listen-port 51820 private-key ~/wg/privatekey
    # ip link set wg0 up
    # wg
    interface: wg0
      public key: IlC...QI=
      private key: (hidden)
      listening port: 51820
    # wg set wg0 peer 0GS...0U= allowed-ips 0.0.0.0/0 endpoint 22.214.171.124:51820
(generate new keys to manage, create new network interfaces, assign new IPs, run wireguard ...)
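For comparison, the same tunnel can be expressed as a single wg-quick config file (a sketch only; the keys, endpoint, and file path here are placeholders, not values from the transcript above):

```shell
# Hypothetical wg-quick style config for peer 0; keys/endpoint are placeholders.
cat > wg0.conf <<'EOF'
[Interface]
PrivateKey = (peer 0 private key)
Address = 10.0.10.1/24
ListenPort = 51820

[Peer]
PublicKey = (peer 1 public key)
AllowedIPs = 0.0.0.0/0
Endpoint = (peer 1 address):51820
EOF
# then, as root: wg-quick up ./wg0.conf
cat wg0.conf
```

This bundles the key material, addressing, and peer list that the manual `ip`/`wg` commands set up one step at a time.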
I would agree that this is relatively simple, but only compared to the other mainstream options (namely OpenVPN and IPsec); it is much, much more complicated than sshuttle, which distinguishes itself by allowing you to use any ssh server as a VPN endpoint.
No server side software install is required - all you need on the endpoint is an ssh login.
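To illustrate (a sketch; `user@host` is a placeholder for any ssh login you hold, and sshuttle must be installed on the client), the entire client-side setup is one command:

```shell
# Route all IPv4 traffic (0/0 is shorthand for 0.0.0.0/0) and DNS queries
# through the ssh connection. The remote side needs only a Python
# interpreter and an ssh login -- no daemon or root install.
sshuttle --dns -r user@host 0/0
```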
But I do admit that the kernel stuff can be painful.
However, if it goes mainline, that will be moot.
should be sufficient to help you set it up.
Many people have very little choice when it comes to the ISP they use. By setting up a "cloud server", at least they get to decide who, i.e. which cloud provider, has access to their traffic -- and can switch between providers much easier and at any time they wish.
Does any major (or minor) cloud provider do that?
The main advantage of IPsec and OpenVPN is that they are either natively supported by all major OSs (desktop and mobile) or there are apps freely available for that purpose.
I only know TunSafe, which has now finally been open sourced. But it was still controversial software, so any alternative would be nice.
As a former fan of WireGuard, I would STRONGLY advise against using ANY WireGuard implementation and that includes the official one.
As it is now, there seems to be only one person in the world who believes that they have the knowledge to determine if a WireGuard implementation is secure or not, and it is the founder himself.
So much for the 4000 lines of code that was supposed to be easily audited and understandable by others.
Avoid WireGuard until there is an outside group that can review the protocol and its implementations. As it stands, the founder accuses all third-party implementations of being insecure, but without being able to state why, beyond the sense that he has something personal against them.
It all feels too immature.
For those who are after Windows clients, the WireGuard project will hopefully have one quite soon, and of course we're happy to work with interested Windows developers who are working on similar projects with a security-minded attitude.
I'm sure the subsequent replies to this message will have plenty of outcry, demands for details, misinformation, and accusations, to bait this into a long sprawling thread. I'd like to preemptively step out of that kind of mudslinging. But I do think it's important to warn users, hence the note above.
> Yet in spite of your to-date brazenness, I'm still willing to work with you if you'd like to turn things around. Shoot me an email if you'd like to talk about open sourcing this work and integrating with the community.
It's open source now, rather than full of ads or being sold.
In the mean time, I still have ocserv and openconnect on Ubuntu & Mac so I'm happy.
A security hole in WireGuard's wg-quick, which many use to establish the connection, is that it allows the .conf file to download and execute programs without asking the user, and this feature is enabled by default.
This is basically a good feature and allows admins to run custom software as soon as the connection has been established.
However, it also allows an evil (or NSA-hooked) VPN provider to issue .conf files that infect the user's computer with malicious code, because users of VPN services rarely review their .conf files.
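As a hypothetical illustration (the file name, placeholder keys, and the PostUp command below are invented for this sketch, not taken from any real provider's config), a wg-quick style .conf can carry a PostUp line that wg-quick executes as root when the tunnel comes up:

```shell
# Hypothetical wg-quick style config; keys are redacted placeholders.
cat > demo-wg0.conf <<'EOF'
[Interface]
PrivateKey = (placeholder)
Address = 10.0.10.1/24
# wg-quick runs this as root once the interface is up, so an
# untrusted .conf could place any command here:
PostUp = echo pwned-by-conf >> /tmp/wg-demo.log

[Peer]
PublicKey = (placeholder)
AllowedIPs = 0.0.0.0/0
EOF

# The command is plainly visible if you review the file before use:
grep '^PostUp' demo-wg0.conf
```

This is why reviewing a provider-supplied .conf before handing it to wg-quick matters: the PostUp/PreUp/PostDown/PreDown hooks are arbitrary shell.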
TunSafe has the same feature, but it is disabled by default and requires Admin privileges to enable.
I like that TunSafe ships with more restrictive security settings by default, though hardcore users may not appreciate that.
I know you pre-emptively opted out of backing up your accusations but I'm going to ask anyway because otherwise it just seems like you are spreading FUD. What are the standing security issues and interoperability issues? Also, how has the developer been adversarial in his position? I'd genuinely like to know.
Without any extra tuning I see much better speeds than OpenVPN - something like 20MB/s vs 5MB/s when transferring between EU and US locations.
This seems like a flawless approach. The Zinc approach seems to be preferred (by those involved) for simple software-only use cases, and the more complex use cases seem to be composed of operations which Zinc could implement.
It's good that they're not just going to plop the thing in there in a degraded state (with probably worse performance [and DoS resistance] than the current out of tree/dkms distributions of wireguard).
Yea, indeed, I'm really trying to get the mainline version to have the same performance and security characteristics as the out-of-tree module version.
(And after it's mainlined, the out-of-tree module will only exist as compatibility for older kernels, and I'll have some scripts to automatically extract a mainline kernel into a backport.)