REST Servers in Go: Part 1 – standard library (eli.thegreenplace.net)
0xbkt 1195 days ago [-]
I would probably go with gRPC + grpc-gateway[1] instead. Declare your services and models in proto files, and annotate your services with google.api.http so grpc-gateway can scaffold your HTTP layer. Then just implement your services against the interface generated by grpc-go. You can even register your gRPC services with grpc-gateway without actually bringing up a gRPC server. You end up having your exact data models injected into your service handlers, so you don't have to repeat yourself preparing the groundwork to call into your services.

This is mostly how services in big OSS projects are exposed today for consumption in a RESTful style. One exception I know of is sourcegraph/sourcegraph.

[1] https://github.com/grpc-ecosystem/grpc-gateway#usage

hardwaresofton 1195 days ago [-]
It feels like an ugly hack that grpc-gateway sits on top of gRPC to support HTTP/1 when gRPC could have been written to support HTTP/1 and HTTP/2 from the beginning. Roughly, gRPC ~= protobuf (serialization & schema enforcement) + HTTP/2 + non-standard HTTP/2 (trailers, etc), and it is hard to believe that just as they standardized the non-standard bits, they couldn't have found a way to work over HTTP/1 (or HTTP/1.1 at least) with the flexibility available there.

gRPC feels to me like a violation/break in expectations of how the layers of abstraction are supposed to work -- gRPC works over HTTP/2, and it's weird that traffic at a lower level (HTTP/1) gets translated at a higher level (the gRPC layer) only to be re-encoded into something at the lower level again. An appropriately robust protocol would simply support both lower levels of abstraction if it wanted to (and they were widespread in daily use), right?

The only places I feel like I see this kind of layer violation are in lower-level networking, and it generally just makes everything worse and more complicated there. Of course, I know that gRPC + gateway is actually not a huge deal in practice -- you've got reverse proxies like envoy that will do it for you automatically[0], but it just... doesn't sit great. The benefits of gRPC are not to be sneezed at (better performance, strict typing at the protocol level, schema enforcement, bidirectional streaming, etc), but it feels like it could have accomplished a lot of those goals without throwing out HTTP/1.1 completely (and then that machinery could have been reused to support HTTP/3).

[0]: https://www.envoyproxy.io/docs/envoy/latest/configuration/ht...

anderspitman 1194 days ago [-]
IMO your bidi streaming layer should be entirely transport- and data-agnostic. You can build those in a layer up if you actually need them. See RSocket[0] or my own omnistreams[1].

[0]: https://rsocket.io/

[1]: https://github.com/omnistreams

anderspitman 1195 days ago [-]
These things can be fun to play with, but my experience is most projects don't need them. I would rather spend my time implementing functionality than regenerating and recompiling code from proto files. The large projects you mention often have something most projects don't: a well-defined data model and interface that has been heavily iterated and evolved over time. At that point things like backwards compatibility, auto docs, and performance become more important, and gRPC can be more useful.

You only get a certain number of complexity tokens and IMO it's not worth spending any here.

ammanley 1194 days ago [-]
Seconding this. Our team actually decided to implement gRPC + grpc-gateway in a green-field project at work, as part of an initiative to "PoC" that it could be done and bring us lots of benefits. We got auto-gen'd docs, clients - the works that gRPC + protobuf in Go promises.

Between a few breaking upstream changes in grpc-gateway (IIRC) and learning a new framework vs. the stdlib + chi router setup that had already been proven in the greater team, feature shipment dropped through the floor. Couple that with the fact that, as stated, these benefits assume well-iterated data models; our _not_ well-established data models proved to be a constant PITA, forcing us to regenerate this file and that file. No one used the auto-gen'd docs because they were constantly changing anyway. And integrating with our CI/CD pipeline was a nightmare.

Between providing value with a team's previously proven tools and spending innovation tokens, I'd definitely suggest weighing carefully whether you need this to be one of them (bidirectional streaming and a well-established project seem like good candidates, though).

whoisjohnkid 1195 days ago [-]
Have you looked into Goa[1]? You define your services with a contract via their DSL and it generates all the code for you as well; it has some nice validation options out of the box. Similar to grpc-gateway, you just need to conform to the generated code's service interface.

I believe it supports HTTP, WebSocket, and gRPC.

[1] https://github.com/goadesign/goa

yolo42 1195 days ago [-]
This is exactly what I started using in my new open source project (not announced yet) and the experience has been amazing. I was wary about it in the beginning, but it quickly became clear that this is the easiest and probably the fastest way to get the job done. The community and tooling have come a long way.
bincyber 1194 days ago [-]
Can you expand on how you can register gRPC services to grpc-gateway without needing to run a gRPC server?
0xbkt 1194 days ago [-]
`protoc-gen-grpc-gateway` generates a few public functions which you can use to bind your service to grpc-gateway. You can:

- pass a gRPC client instance,
- pass a gRPC connection,
- pass a gRPC endpoint in string format,
- pass a gRPC service instance implementing the service interface generated by `protoc-gen-go-grpc`.

These are meant to fit grpc-gateway to scenarios where the gRPC server might be implemented in another programming language, or accepting clients on a TCP port or another kind of socket, possibly in a different network realm than grpc-gateway. Pretty much like a reverse proxy, but it can also be integrated with services at the code level, avoiding the overhead of a transport layer.

anderspitman 1195 days ago [-]
I think I've managed to get by with fewer dependencies in Go than in any other language. It somehow walks the line between JavaScript leftpad and Python "stdlib is where modules go to die".

I don't think there's been a single instance where I've thought "why can't stdlib do this?" nor "why the heck is this in stdlib?"

SergeAx 1195 days ago [-]
There is an almost (?) official set of Go Proverbs: https://go-proverbs.github.io/ One of them is "A little copy is better than a little dependency". I like that one very much.
laverya 1195 days ago [-]
> I don't think there's been a single instance where I've thought "why can't stdlib do this?"

I've had this a few times, most recently with "how do I add this data file to my binary". At least that one made it to master now, and will be in 1.16!

Another gripe is the lack of a proper parallel-safe map (no, sync.Map's interface{}-to-interface{} API is just not acceptable), which would be a godsend and should honestly just be the normal map implementation. Maybe eventually with generics... (This is more of a language gripe than a stdlib gripe, though, and I'm sure that once generics come out we'll see a sync.Map that doesn't use interface{}.)
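For readers who haven't hit this: the friction is that sync.Map's Load/Store traffic in interface{} values, so every read needs a type assertion. A common workaround is a small typed wrapper that confines the assertions to one place -- a hedged sketch with made-up names, not a stdlib feature:

```go
package main

import (
	"fmt"
	"sync"
)

// TaskMap wraps sync.Map so callers never see interface{}.
// Hypothetical type for illustration only.
type TaskMap struct{ m sync.Map }

func (t *TaskMap) Set(id int, name string) { t.m.Store(id, name) }

func (t *TaskMap) Get(id int) (string, bool) {
	v, ok := t.m.Load(id)
	if !ok {
		return "", false
	}
	return v.(string), true // the type assertion lives in exactly one place
}

func main() {
	var tasks TaskMap
	tasks.Set(1, "write docs")
	if name, ok := tasks.Get(1); ok {
		fmt.Println(name)
	}
}
```

With generics (Go 1.18+), the same wrapper can be written once as a generic type instead of per value type.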

rad_gruchalski 1195 days ago [-]
> I've had this a few times, most recently with "how do I add this data file to my binary". At least that one made it to master now, and will be in 1.16!

And before 1.16, there is statik: https://github.com/rakyll/statik. It creates an embeddable file system from files or directories. It's awesome for packaging web front ends into binaries.

laverya 1195 days ago [-]
Before 1.16, there's a whole list [0], though I've personally used go-bindata the most. I've yet to try statik, though I doubt I'll ever have a reason to now!

0: https://go.googlesource.com/proposal/+/master/design/draft-e...

anderspitman 1195 days ago [-]
How does statik compare to rice (https://github.com/GeertJohan/go.rice), which is what I had assumed everyone was using?
anderspitman 1195 days ago [-]
> I've had this a few times, most recently with "how do I add this data file to my binary"

Ok fair enough, that one bit me too, but honestly I only felt cheated because Rust makes it so easy. `include_bytes!` was like cheating the first time I used it.

swirepe 1195 days ago [-]
>I've had this a few times, most recently with "how do I add this data file to my binary". At least that one made it to master now, and will be in 1.16!

Wait, how?? I've done some unholy things.

laverya 1195 days ago [-]
The best example for the new '//go:embed' directive I've seen so far is this:

  package main
  
  import (
      "embed"
      "net/http"
  )
  
  //go:embed assets/*
  var assets embed.FS
  
  func main() {
      fs := http.FileServer(http.FS(assets))
      http.ListenAndServe(":8080", fs)
  }
For the next month, the list of options here [0] will have to suffice.

0: https://go.googlesource.com/proposal/+/master/design/draft-e...

Ingon 1195 days ago [-]
Or you can just use the 1.16 beta compiler.
swirepe 1195 days ago [-]
That's beautiful.
erichanson 1195 days ago [-]
This is so frickin' cool.
hit8run 1195 days ago [-]
I've written a full-stack web app with payment integration, multi-tenancy, etc. in plain Go. I learned a lot on that journey, and what I learned made me value full-stack frameworks like Rails again. If you want to build something competitive, you will need to write lots of things in plain Go that you wouldn't need to bother with when using Rails or something similar with sane conventions.
heofizzy 1195 days ago [-]
+1 on that. I recently had to write a small web app that does some data processing and interacts with a database on request. We chose Go because it was faster than PHP for one particular data-intensive task. I liked working with Go, but frameworks like Rails, Laravel, or Django have a huge advantage in terms of developer productivity.
lenartowski 1195 days ago [-]
Out of curiosity, is your app shared on github?
ankurpatel 1195 days ago [-]
Looking at such articles makes me feel we are going back in time rather than improving efficiency for developers building RESTful servers. If you look at Ruby on Rails, you can build the server shown here in one minute, scalable and backed with a database. I know people will complain about the speed of execution of the language and framework, but do you really care if you are not expecting Google-like traffic?
skjfdoslifjeifj 1195 days ago [-]
He could build it in Go much faster just by using a couple of additional libraries but he's purposely limiting himself to just the standard library for the purpose of these articles.

> I know people will complain about speed of execution of language and framework but do you really care if you are not expecting Google like traffic.

If you don't care that's fine, but there are lots of us that do. This is subjective, but I find the memory use and overall performance of Rails/Django/Spring Boot applications to be completely unacceptable and there are far too many breaking changes.

ptr 1195 days ago [-]
It doesn't make sense to say Rails etc. are "completely unacceptable" without giving a context. How can it be subjective? Memory is also rather arbitrary - why focus on that and not user friendliness? Or even GPU or cache efficiency? "Breaking changes" might not matter either if what you're writing is a one-off.
lenartowski 1195 days ago [-]
What languages/framework do you use when performance/memory usage is key?
theshrike79 1195 days ago [-]
Not OP, but I'd go with Go or a fresh version of .NET Core depending on where it would need to integrate to.

If we're talking real time sub-millisecond performance, then Rust.

ithrow 1195 days ago [-]
Some prefer clear, flexible, easy-to-debug code instead of coding against 15 layers of abstraction. For an HTTP API you have to discard more than half of Rails anyway.
qudat 1195 days ago [-]
I agree so hard on this. Anyone recommending Rails as some sort of elevated way to build a RESTful API is puzzling to me. It's easy to get started with for people who don't want to code; instead they spend all of their time memorizing the cascade of configuration objects, where you have to learn the exact phrase to get Rails to do what you want it to.

I know this is all opinion and I have colleagues who are excellent engineers that prefer rails, but it goes against everything I enjoy about software development.

scrose 1195 days ago [-]
I wouldn't call Rails 'easy to get started with'. The learning curve is steep, but once you get over it, you can hop into nearly any Rails application and immediately be able to contribute. I prefer Rails for most things work-related for that very reason.

Having worked a couple of places where Python or Go microservices were hyped up, but every installation, DB schema, and folder structure seemed to be built with a different idea in mind (and/or debated), I have a strong appreciation for firm standards around convention. Especially when I don't have to get into debates like whether DB table names should be plural or singular.

On the side I really do enjoy working with Go though, and wouldn’t hesitate to use it if it seemed like the right tool for the moment.

pm90 1195 days ago [-]
I suspect you will see a lot more of Rails (and Rails-like frameworks) being used just because they are so simple to use and beginners can use them without understanding too deeply. Having more people who can understand/build/fix will always win out, and Rails is what most coding bootcamps teach. So there are just going to be too many folks who will choose Rails over more suitable languages.

I’m not super convinced that’s a bad thing though. The software industry has always suffered a perennial shortage of engineers. What I predict will happen is that an ecosystem of tools will spring up around rails and there will be serious investment in improving performance rather than the framework being abandoned.

Remember how kubernetes changed infrastructure development to managing config files? It’s what I see happening to most areas of software development. The more experienced software engineers will then be responsible for optimizing/bug fixing/ scaling.

RobertKerans 1195 days ago [-]
Putting aside that your post kinda reads as if it were written ten years ago (most bootcamps do not teach Rails and it's vanishingly unlikely it'll reach the position it was ten years ago: cf JavaScript), I take some issue with

> beginners can use it without understanding too deeply...So there’s just going to be too many folks who will choose rails over more suitable languages.

It's not about "beginners" or "more suitable languages". You need an ability to build highly specialised software and you need general frameworks that will not be optimised for {specialised thing} but can do most of the things that you need for a certain task without having to hand write everything. Without the latter, all that happens is constant yak shaving and reinventions of the wheel. Rails (for example) is highly suitable for many things, getting clever points for not using it is great, but it's not very practical.

morelisp 1195 days ago [-]
A lot of bootcamps in Europe are still teaching RoR. (I doubt "most", but I also doubt "most" for any single language once you include JS for front-end/"full stack", Java for "back end", and Python for ML/DS focus.)

Bootcamps sell themselves based on employment placement, not cool technology or deep knowledge. In this regard RoR, Java, and even PHP are going to be bootcamp milch cows for years to come.

hardwaresofton 1195 days ago [-]
What you're pointing to is the need for better abstractions, and Go is not the language for that (it will be more so when generics arrive).

There is a language that has faster-than-Go speed and better abstractions, but HN seems somewhat divided on whether it's awesome or terrible, and there was recently a blog post on the front page about how it was bad for APIs (which I heavily disagree with, but I am biased, so I opted not to comment on it).

Go definitely does have some other web frameworks that make work like this simpler, but on the other hand, people sometimes shy away from those because the stdlib is more universal.

franklyt 1195 days ago [-]
Are you talking about Haskell? That’s a hard sell.
hardwaresofton 1195 days ago [-]
Like others posted, I definitely meant rust, if we're talking compiled, compile time checked systems languages -- there are only so many recent entries with something new to say in the space.

If we're talking just pure abstraction, I think there are a lot of other choices that could have delivered similar improvements in performance for an IO-dependent workload, with better methods of abstraction, ecosystem, and safety -- that choice for me would be TypeScript.

[EDIT] Also to note, even as a Haskell zealot I'm not crazy enough to suggest someone choose Haskell as an alternative where Go would have been good enough. I have enough experience with business needs to know that the purity/safety/whatever other benefits of haskell just aren't worth the lack of ecosystem, difficulty in finding developers, and hit to developer speed. Haskell is too far on the spectrum (on various axes) to be the right choice most of the time, and not enough companies have shared how they've outperformed with it to even start the conversation. Haskell is like the mercedes of programming languages -- airbags show up there first, but regular cars get them eventually.

franklyt 1195 days ago [-]
I was hoping even less that was the case, feels like a very out of touch comment, but I’m willing to listen.
hardwaresofton 1195 days ago [-]
I don't think I was out of touch (which I guess is always how it goes). The value propositions of Golang and Rust are pretty well understood at this point, and I think that if you want abstraction power the choice between them is clear.

Rust is hard, but it's not harder than Haskell, and at a high level of abstraction it can be simpler than what Golang has to offer in the stdlib -- in the happy path, with better results and more safety. Again, this is the happy-path case, but it's not impossible, just hard/unlikely -- Golang, on the other hand, can never achieve this level of simple interfaces built on abstraction, because of the goals and design choices the language has made.

The original comment is this:

> Looking at such articles makes me feel we are going back in time rather than improving efficiencies for developers to build RESTful server. If you look at Ruby on Rails you can build the server shown here in one min that is scalable and backed with database. I know people will complain about speed of execution of language and framework but do you really care if you are not expecting Google like traffic.

My point was that Rust gives you the tools to write a rails/sinatra (at a glance) library that in the happy/simple path (which most regular CRUD backends are) can be simpler than Golang because the abstractions to make it simple are there, and speed will just about always be the fastest possible relative to quality of underlying code. Golang can (and does) provide similar, but it is always a step behind (on purpose) on the abstraction front. If you're going to provide a carefully crafted interface that is very easy to use, it matters less that rust can have a really high barrier to entry (rails is valuable because you can be productive without being a ruby expert).

franklyt 1194 days ago [-]
Ah, I see your point. But it could just as easily be made in nearly any language, the difference at the level of “is abstract-able” is meters rather than kilometers.

The problem is that as soon as you need something more than a command line call the engineering burden explodes relative to languages that natively support greater abstraction at the lexical and logical level.

hardwaresofton 1194 days ago [-]
> Ah, I see your point. But it could just as easily be made in nearly any language, the difference at the level of “is abstract-able” is meters rather than kilometers.

True, most languages could do it, but there are some hard stops to how easy it is to abstract in golang, and performance also knocks some languages out.

> The problem is that as soon as you need something more than a command line call the engineering burden explodes relative to languages that natively support greater abstraction at the lexical and logical level.

Agreed, once you're off the happy path things can be many times more painful in Rust than Go. I personally think Go will replace Java in most enterprise-y software shops within 5-10 years. Unless there's a specific reason to use the JVM, Go is more than good enough and its goals align with industry very well.

franklyt 1194 days ago [-]
I think go would be huge if they could ease their pain points. The average developer isn’t interested in why generics are actually not necessary, or “here’s how you can still use generics” stuff. Even if they’re wrong, you still need to work in that reality.
RobertKerans 1195 days ago [-]
Rust I think, there was a blog post on how it is problematic as a web dev platform as opposed to lower level tasks (where it shines).
geodel 1195 days ago [-]
I think most people are expecting zero traffic so they do not need to waste even one min to write any REST server.
pjmlp 1195 days ago [-]
That is the whole ethos of the Go community; at least they had something modern like a GC from the get-go.
morelisp 1194 days ago [-]
> I know people will complain about speed of execution of language and framework but do you really care if you are not expecting Google like traffic.

It's not about whether you expect Google-like traffic, but about whether you expect a Google-like ratio of traffic/compute to capacity. My day job doesn't deal with "Google-like" traffic (maybe one Google product's worth of traffic), but we also only have about 10 servers to ingest what we do get. My current side project I expect to deal with even fewer orders of magnitude but I also want to be able to run it on a single server to keep costs and operational overhead low.

qudat 1195 days ago [-]
“Setting up a server” has to happen only once. I’d rather spend a couple days setting something up that I have complete control over rather than using a one-line setup solution that I’ll have to rip out months later.
friseurtermin 1195 days ago [-]
That's literally the point of this article:

> There are strong opinions both for and against using frameworks. My goal in these posts is to examine the issue objectively from several angles.

samuelroth 1195 days ago [-]
Nice article! This is an interesting approach, much less likely to make Go devs' blood boil over unnecessary libraries.

My only question is why the server / HTTP handlers have to deal with the Mutex. That seems like a "leak" from the `TaskStore` abstraction, which otherwise I really like. (Thank you for not using channels in that interface!)

jrockway 1195 days ago [-]
I think it's necessary to leak the details of the mutex until you have some sort of transaction object to abstract that away. In a concurrent workload, these two things are different:

   store.Lock()
   store.WriteKey("foo", "bar")
   x := store.ReadKey("foo")
   store.Unlock()
   // x is always "bar"
And:

   store.Lock()
   store.WriteKey("foo", "bar")
   store.Unlock()

   store.Lock()
   x := store.ReadKey("foo")
   store.Unlock()
   // x could be whatever another goroutine set "foo" to, not the "bar" that you just wrote.
In a more complicated app, you'll have a library that acts as the datastore, with transaction objects that abstract away the actual mutex (which will be something more complicated):

   var x string
   err := db.DoTx(func(tx *Tx) {
     tx.Write("foo", "bar")
     x = tx.Read("foo")
   })
   if err != nil { ... }
   // what x is depends on the details of your database; maybe you're running at "read uncommitted", maybe you're running at "serializable".
But, even in the simple examples, it's worth thinking about the difference between lock { write; read } and lock { write }; lock { read }.
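At the simplest end of that spectrum, the mutex can at least be kept out of the HTTP handlers by locking inside the store's methods -- a hedged sketch with hypothetical names, not the article's actual code, and it only works when each operation is a single atomic unit (the cross-call read-after-write case above still needs a transaction object):

```go
package main

import (
	"fmt"
	"sync"
)

// TaskStore keeps its mutex private, so callers never lock by hand.
type TaskStore struct {
	mu    sync.Mutex
	tasks map[int]string
	next  int
}

func NewTaskStore() *TaskStore {
	return &TaskStore{tasks: make(map[int]string)}
}

// CreateTask locks internally; callers get a plain method call.
func (s *TaskStore) CreateTask(text string) int {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.next++
	s.tasks[s.next] = text
	return s.next
}

// GetTask is likewise atomic on its own, but two separate calls
// are NOT atomic together -- exactly the distinction made above.
func (s *TaskStore) GetTask(id int) (string, bool) {
	s.mu.Lock()
	defer s.mu.Unlock()
	text, ok := s.tasks[id]
	return text, ok
}

func main() {
	store := NewTaskStore()
	id := store.CreateTask("buy milk")
	text, _ := store.GetTask(id)
	fmt.Println(id, text)
}
```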
alchermd 1195 days ago [-]
I'm an API developer working with Python and Django. I did dabble with Golang for quite some time, but I just can't seem to justify the effort (in terms of lines of code and static typing) of writing a REST API with Go when I can build a similar one with Django and co. (DRF, Swagger, etc.).

Can someone chime in? There must be an obvious advantage that I might be missing.

lmarcos 1195 days ago [-]
I guess it's subjective. For me the clear advantage of building HTTP endpoints using Go (framework-less) over DRF, Swagger and others is: no magic, no annotations, no dependencies; I know exactly what's happening and I can build a tailored system that is adjusted to my needs. This leads to very performant systems that are highly independent of the current trends and, most importantly, to a high degree of ownership of the systems you are building.

The act of writing this kind of system is, IMHO, somehow liberating.
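To make the "no magic" point concrete: a framework-less JSON endpoint really is just a handful of explicit lines. A hedged stdlib-only sketch (the task type and route are made up, not taken from the article):

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// task is a stand-in data type for this sketch.
type task struct {
	ID   int    `json:"id"`
	Text string `json:"text"`
}

// tasksJSON builds the response body; splitting it out keeps the
// handler trivial and the encoding easy to test.
func tasksJSON() []byte {
	b, _ := json.Marshal([]task{{ID: 1, Text: "write the handler"}})
	return b
}

// listTasks is a plain net/http handler: explicit method check,
// explicit header, no framework and no annotations.
func listTasks(w http.ResponseWriter, r *http.Request) {
	if r.Method != http.MethodGet {
		http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
		return
	}
	w.Header().Set("Content-Type", "application/json")
	w.Write(tasksJSON())
}

func main() {
	// A real server would register the handler and listen:
	//   http.HandleFunc("/tasks", listTasks)
	//   http.ListenAndServe(":8080", nil)
	// For the sketch, just show the payload the handler would send.
	fmt.Println(string(tasksJSON()))
	// → [{"id":1,"text":"write the handler"}]
}
```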

mhh__ 1195 days ago [-]
Static typing buys you correctness now. Go isn't really a poster child of language design, to be blunt - but it's better to find out now rather than in production.

Compiled code is also significantly faster in many cases.

volkk 1195 days ago [-]
I see your point, and I know OP asked about why Go instead of Python, but having a statically typed language isn't limited to Go. You could replace everything you wrote with Java/C++/C# and still have a similar answer (minus language design, I guess, which is a different argument).
ascotan 1195 days ago [-]
1. Simpler architecture - Python typically requires multiple systems to deal with concurrency (ioloops, celery, etc.)

2. Execution speed - Django's ORM comes to mind

3. Concurrency

That being said, there are a lot of great features that come out of the box with Python web frameworks. You need a lightweight framework like gin/echo or something to get those features in Go. Naked net/http is sorta like the 'erector set' of web frameworks.

earthboundkid 1195 days ago [-]
Every time I use DRF, I end up making some huge and horrible abstraction that I hate after its creation, like Dr. Frankenstein. So far, this has not happened to me with Go. I don't know if the problem is just me or what, but I find it easier in Go to just define a data type, grab some JSON from the request, and send some other JSON back without making myself crazy.
alchermd 1195 days ago [-]
Interesting. Can you give an example of a "huge and horrible" abstraction that stems from your usage of DRF?

I remember having the same sentiment a few months back, but investing a considerable amount of time planning the structure of serializers and sticking to DRF's patterns did reduce the amount of "fighting" that I need to do to make my API work as expected.

zemo 1195 days ago [-]
Running Python servers is annoying; you have to have a menagerie of stupid little parts and things to get it all to fit together. I haven't done this in years, but I always wound up with some mess of virtual envs, pip, gunicorn, nginx proxying, something to start the services - and that's _before_ writing any of my own code. With Go I just compile a static binary, rsync it to a server, turn it on, and call it a day. The only other part I often use is nginx as a reverse proxy because it's easier to harden. Way easier to operate and way easier to not break once your project gets beyond a few kloc.
bartvk 1195 days ago [-]
> With Go I just compile a static binary, rsync it to a server and turn it on

With "turning it on", you mean you write a systemd configuration file and start it using systemctl, right? Just curious how people do this stuff nowadays.
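For reference, the systemd route usually amounts to a small unit file like the following -- a hedged sketch with made-up names and paths:

```ini
# /etc/systemd/system/myapp.service  (hypothetical name and paths)
[Unit]
Description=My Go REST server
After=network.target

[Service]
ExecStart=/usr/local/bin/myapp
User=www-data
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then `systemctl enable --now myapp` starts it and keeps it starting on boot.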

zemo 1195 days ago [-]
usually for stores like this I just do this:

    type Store interface {
        GetTask(*Task) error
    }
instead of having "GetTaskByID" and "GetTaskByTag" and whatnot. Then in the caller you just do this:

    task := store.Task{ID: 5}
    if err := db.GetTask(&task); err != nil {
        // whatever
    }
^ that gets the task by ID

    task := store.Task{Tag: "foo"}
    if err := db.GetTask(&task); err != nil {
        // whatever
    }
^ that gets the task by tag.
hellcow 1195 days ago [-]
The downside to this approach is that you're unable to know what fields you can use in your query without reading the GetTask function, and changes to the GetTask function can silently break all callers, since it's now a runtime error.
zemo 1194 days ago [-]
tbh I've been using this method for years and the first one has never been an issue, because practically speaking you should only expect to give a single struct with a field or two filled in if that field or combination of fields is unique. You probably need that level of domain knowledge about what you're working on elsewhere anyway, so it has never been a problem.

I mean... for the second problem, that's broadly true of making any changes to your data access layer, since by definition your Go compiler is not going to, for example, check the validity of a SQL query. So... yes, but that's not unique to this approach; that's true generally.

fpopa 1195 days ago [-]
This makes sense, did you implement this alongside grpc / protobuf?

I'm curious about the way you handled zero values; field masks could be a solution, but I think it would get bloaty.

zemo 1194 days ago [-]

    func (db *actualStoreImplementation) GetTask(t *Task) error {
        if t.ID != 0 {
            // query by ID, mutate the parameter, return nil
        }
        if t.Tag != "" {
            // query by tag, mutate the parameter, return nil
        }
        return ErrWhatever
    }
usually I have some other package that defines all of the types that can appear on the wire (which I often call `wire` because `proto` is taken by protobuf), define some exported interface in that package with an unexported method so that no other packages can define new types for that interface, and then have a method on my db structs that returns the wire types, like this:

    func (t Task) Public() wire.Value {
        return wire.Task{
            // explicitly generate what you want
        }
    }
mcdoker18 1195 days ago [-]
I prefer the same approach, but I'd create a separate struct, such as TaskFilter, and use pointer fields, because a zero ID or an empty tag can be a valid value.
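The suggestion above can be sketched like this -- hypothetical names throughout, illustrating how pointer fields make "not filtering on this field" (nil) distinct from "filter on the zero value":

```go
package main

import "fmt"

// Task is a stand-in for the store's data type.
type Task struct {
	ID  int
	Tag string
}

// TaskFilter uses pointer fields: nil means "don't filter on this",
// while a pointer to the zero value is a real filter criterion.
type TaskFilter struct {
	ID  *int
	Tag *string
}

// Matches reports whether a task satisfies every set field of the filter.
func (f TaskFilter) Matches(t Task) bool {
	if f.ID != nil && t.ID != *f.ID {
		return false
	}
	if f.Tag != nil && t.Tag != *f.Tag {
		return false
	}
	return true
}

func main() {
	empty := "" // filtering on an empty tag is now expressible
	f := TaskFilter{Tag: &empty}
	fmt.Println(f.Matches(Task{ID: 5, Tag: ""}))  // true: tag equals ""
	fmt.Println(f.Matches(Task{ID: 5, Tag: "x"})) // false
}
```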
tekstar 1195 days ago [-]
Nice demo. A few things you could add to make this more realistic to something that gets shipped:

- CORS support. Deploy this to a non-local domain and try to reach it from a web browser and it will fail. I like https://github.com/rs/cors. I had rolled my own but then moved to that library.

- input validation. I like go-playground/validator.

The other big issue is the locking by hand around the task store. In reality there would usually be a database to handle concurrent reads/writes. I use SQLite in production. I know this is just a demo and you want to use just the stdlib, but serializing all data access is sort of unacceptable as a solution in a concurrent language like Go. When I'm not handling concurrency with SQLite, I like to implement the actor pattern: a persistent goroutine listens and responds to "taskstore" requests via channels.
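The actor pattern mentioned here can be sketched roughly as follows -- a hypothetical stdlib-only toy (all names made up), where one goroutine owns the map exclusively, so access is serialized through channels and no mutex is needed:

```go
package main

import "fmt"

// getReq asks the store goroutine for a task; the reply channel
// carries the answer back to the caller.
type getReq struct {
	id    int
	reply chan string
}

// setReq writes a task; done is closed once the write has landed.
type setReq struct {
	id   int
	text string
	done chan struct{}
}

// runStore owns the map: only this goroutine ever touches it.
func runStore(gets chan getReq, sets chan setReq) {
	tasks := make(map[int]string)
	for {
		select {
		case g := <-gets:
			g.reply <- tasks[g.id]
		case s := <-sets:
			tasks[s.id] = s.text
			close(s.done)
		}
	}
}

func main() {
	gets := make(chan getReq)
	sets := make(chan setReq)
	go runStore(gets, sets)

	done := make(chan struct{})
	sets <- setReq{id: 1, text: "buy milk", done: done}
	<-done

	reply := make(chan string)
	gets <- getReq{id: 1, reply: reply}
	fmt.Println(<-reply) // → buy milk
}
```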

permille42 1195 days ago [-]
You are implying that "realistic" uses would be using multiple domains and have browser calls between them. Why would that be?
tekstar 1195 days ago [-]
I guess I have a deeper point to make, which is: if you're going to compare Golang web libraries, it's the details that are going to end up making your decision. Superficial comparisons will be misleading. Implement auth, CSRF, a "real IP" middleware, request logging, graceful restarts, yes, CORS - and then do a few REST endpoints with it to see how much boilerplate vs. useful code you have to write.

Maybe this is just my opinion because I started a significant golang app with just the vanilla stdlib and then added each of these complexities in turn, before switching to libraries that solve most of the boilerplate better than my hand-crafted patterns did.

permille42 1195 days ago [-]
I opened the article to begin with to see if it contained some argument for why using the standard library instead of a framework makes sense.

It contained no explanation or reasoning to that end. It is purely what it is titled as: An explanation of how to do it with the standard library.

In that sense I don't think there is anything wrong with it. It doesn't make any claims; it simply provides the information it said it would.

Also, one of the main selling points of Golang is that it aims to deliver functionality with minimal additional cruft/add ons. Many of the standard library things are this way and don't provide the "extra stuff" you are desiring.

You mention auth, but many of the frameworks for Golang don't even implement auth and merely provided the ability to add auth if desired.

Logging is a debatable topic because the "standard" logging method is very inadequate, and so each of the various frameworks has to just choose a random logging system (or several) to use.

While some frameworks do offer the ability to restart without killing active connections / requests, it is a more generalized issue with any service in Golang. It is also best addressed by writing your service in a scalable fashion and doing a rolling upgrade. Stop sending traffic to an instance, wait till current requests are done, then kill it. It doesn't need to be addressed within the service itself.

cwackerfuss 1195 days ago [-]
This is common. Web apps frequently fetch data from other domains, whether internal or third party.
permille42 1195 days ago [-]
Agreed, but a common extension does not mean it is required, nor that it should be demanded that an example of making a REST service needs to show how to do it.

The example is simply the beginning. It can be extended as needed depending on your use case. This is how most things Golang work. You start with the basics and add what you need on a case by case basis.

If you want something with every bell and whistle out of the box, Golang is going to be a disappointment generally as that is not the Golang mentality.

There are of course sufficient frameworks in Golang these days to provide such things though. Hence the article clearly says "standard library" to indicate that it isn't the last word on how to setup a REST service in Go.

jrockway 1195 days ago [-]
Good introduction. A few thoughts:

1) Be careful with locks in the form "x.Lock(); x.DoSomething(); x.Unlock()". If DoSomething panics, you will still be holding the lock, and that's pretty much the end of your program. ("x.Lock(); defer x.Unlock(); x.DoSomething()" avoids this problem, but obviously in the non-panic case, the lock is released at a different time than in this implementation. Additional tweaking is required.)

Generally I don't like locks in the request critical path because waiting for a lock is uncancelable, but in this very simple case it doesn't matter. For more complicated concurrency requirements, consider the difference between x.Lock()/x.Do()/x.Unlock() vs. select { case x := <-ch: doSomethingWithX(x); case <-request.Context().Done(): error(request.Context().Err()) }. The channel wait can be cancelled when the user disconnects, hits the stop button in the browser, or the request timeout is reached.

2) Long if/else statements are harder to read than a switch statement. Instead of:

   if foo == "bar" {
      // Bar
   } else if foo == "baz" {
      // Baz
   } else {
      // Error
   }
You might like:

   switch foo {
   case "bar":
     // Bar
   case "baz":
     // Baz
   default:
     // Error
   }
These are exactly semantically equivalent, and neither protects you at compile-time from forgetting a case, but there is slightly less visual noise. Worth considering.

3) I have always found error handling in http.HandlerFunc-style functions cumbersome. The author runs into this, with code like:

   foo, err := Foo()
   if err != nil {
      http.Error(w, ...)
      return
   }
   bar, err := Bar()
   if err != nil {
      http.Error(w, ...)
      return
   }
Basically, you end up writing the error handling code a number of times, and you have to do two things in the "err != nil" block, which is annoying. I prefer:

   func DoTheActualThing() ([]byte, error) {
      if everythingIsFine {
          return []byte(`{"result":"it worked and you are cool"}`), nil
      }
      return nil, errors.New("not everything is okay, feels sad")
   }
Then in your handler function:

   func ServeHTTP(w http.ResponseWriter, req *http.Request) {
      result, err := DoTheActualThing()
      if err != nil {
         http.Error(w, ...)
         return
      }
      w.Header().Set("content-type", "application/json")
      w.WriteHeader(http.StatusOK)
      w.Write(result)
   }
In this simple example, it doesn't matter, but when you do more than one thing that can cause an error, you'll like it better.
AWebOfBrown 1195 days ago [-]
In the latter example, the question is really one of how tightly you wish to couple the application layer to that of the infrastructure (controller). Should the application logic be coupled to a http REST API (and thus map application errors to status codes etc), or does that belong in the controller?

I don't disagree that it's more practical, initially, as you've described it. However, I think it's important to point out the tradeoff rather than presenting it as purely more efficient. I've seen this approach result in poor separation of concerns and bloated use cases (`DoTheActualThing`) which become tedious to refactor, albeit in other languages.

One predictable side effect of the above, if you're working with junior engineers, is that they are likely going to write tests for the application logic with a dependency on the request / response as inputs, and asserting on status codes etc. I shudder to think how many lines of code I've read dedicated to mocking req/res that were never needed in the first place.

jrockway 1195 days ago [-]
I don't think it's the worst thing in the world if you test your http.Handler implementation:

   w := httptest.NewRecorder()
   req := httptest.NewRequest("GET", "/foo", nil)
   ServeHTTP(w, req)
   if got, want := w.Code, http.StatusOK; got != want {
      t.Errorf("get /foo: status:\n  got: %v\n want: %v", got, want)
   }
   if got, want := w.Body.String(), "it worked"; got != want {  
      t.Errorf("get /foo: body:\n  got: %v\n want: %v", got, want)
   }
It leaves very little to the imagination as to whether or not ServeHTTP works, which is nice.

Complexity comes from generating requests and parsing the responses, and that is what leads to the desire to factor things out -- test the functions with their native data types instead of http.Request and http.Response. I think most people choose to factor things out to make that possible, but in the simplest of simple cases, many people just use httptest. It gets the job done.

AWebOfBrown 1195 days ago [-]
I don't think it's poor to test http handling either, as a coarse grained integration test.

The problem I've seen is over-dependence on writing unit tests with mocks instead of biting the bullet and properly testing all the boundaries. I have seen folk end up with 1000+ tests, of which most are useless because the mocks make far too many assumptions, but are necessary because of the layer coupling.

This was mostly in Node though, where mocking the request/response gets done inconsistently, per framework. Go might have better tooling in that regard, and maybe that sways the equation a bit. IMO there's still merit to decoupling if there's any feasibility of e.g. migrating to GraphQL or another protocol without having to undergo an entire re-write.

morelisp 1194 days ago [-]
> I don't think it's poor to test http handling either, as a coarse grained integration test.

Sorry to spring a mostly-unrelated question on you about this, but why do you call this an integration test? I recently interviewed three candidates in a row that described their tests in this way, and I thought it was odd, and now I see many people in this thread doing it also.

I would call this a functional or behavioral test. For me a key aspect of an integration test is that there's something "real" on at least two "sides" - otherwise what is it testing integration with? Is this some side-effect of a generation growing up with Spring's integration testing framework being used for all black-box testing?

(I will not comment about how often I see people referring to all test doubles as "mocks", as I have largely given up trying to bring clarity here...)

AWebOfBrown 1194 days ago [-]
The reality is that I've heard unit, integration and e2e almost entirely used interchangeably, maybe except the former and latter. I don't think trying to nail down the terms to something concrete is necessarily a useful exercise. Attempts to do so, imo, make subjective sense in the terms of the individual's stack/deployment scenario.

To me, it's a contextual term much like 'single responsibility'. In this case, the two "sides" of an integration test are present. A consumer issues a request and a provider responds accordingly. The tests would ascertain that with variations to the client request, the provider behaves in the expected manner.

At which point you might point out that this sounds like an e2e test, but actually using the client web app, for example, might involve far more than a simple http client/library - in no small part because the provider can easily run a simple consumer in memory and avoid the network entirely. E2e tests tend to be far more fragile, so from the perspective of achieving practical continuous deployment, it's a useful distinction.

integration tests in this instance: varying HTTP requests (infrastructure layer) provoke correct behaviour in application layer.

e2e: intended client issues http requests under the correct conditions, which provokes certain provider behaviour, which client then actually utilises correctly.

This, to me, is why the most important part of testing is understanding the boundaries of the tests. Not worrying about their names.

eliben 1195 days ago [-]
Thank you for the detailed comment!

Just want to address (1) quickly. As you've mentioned at the end of the parenthesized note, the reason I did not use `defer` here is to avoid the lock staying across the rest of the handler. I wanted to confine it to the datastore interaction.

Thinking more about this now and having read the comments, I'm considering just hiding the lock in TaskStore and avoiding all these explicit locks/unlocks in the handlers; it seems like it will avoid some confusion for folks reading the example (as well as quite a few lines of code!), and my goal here is really the HTTP server logic. I'd prefer to deflect attention from TaskStore in this series of posts.

physicles 1194 days ago [-]
If you only want to hold the lock for a portion of a function, pull that part out into its own function. Makes the code much easier to reason about. I consider .Unlock() without defer to be a code smell in nearly all cases.
mcdoker18 1195 days ago [-]
Good advice! A small addition in the third point: you can create a separate interface for HTTP errors, for example:

    type HTTPError interface {
        GetHTTPCode() int
    }
    func ServeHTTP(w http.ResponseWriter, req *http.Request) {
        result, err := DoTheActualThing()
        if err != nil {
            statusCode := http.StatusInternalServerError
            if httpError, ok := err.(HTTPError); ok {
                statusCode = httpError.GetHTTPCode()
            }
            http.Error(w, ..., statusCode)
            return
        }
        w.Header().Set("content-type", "application/json")
        w.WriteHeader(http.StatusOK)
        w.Write(result)
    }
You can apply the same approach for the HTTP body in case of error.
klohto 1195 days ago [-]
Would you please expand more on your first point regarding using channels instead of locks? It’s hard for me to wrap my head around it without a practical example.
Philip-J-Fry 1195 days ago [-]
Not the OP but basically imagine that instead of locking a mutex to handle synchronised writes, you spawn a goroutine which just reads from a channel and writes the data.

If that goroutine hasn't finished processing then the channel will be blocked, just like a mutex.

So in your handler you can use a select statement to either write to the channel OR read from request.Context().Done(). The request context only lives as long as the request. So if the connection drops or times out, the context gets cancelled, the Done channel is closed, and your read is unblocked.

Because you use a select statement, whichever operation unblocks first is what happens. If the write channel unblocks, then you get to write your value. If your request context gets cancelled, then you can report an error. The request context will always get cancelled eventually, unlike a mutex, which will wait forever.

makeworld 1195 days ago [-]
> Be careful with locks in the form "x.Lock(); x.DoSomething(); x.Unlock()". If DoSomething panics, you will still be holding the lock, and that's pretty much the end of your program.

Interesting, thanks. But isn't panicking the end of your program anyway? Could you provide another example where not using defer causes problems?

acrispino 1195 days ago [-]
Not necessarily. Panics can be recovered and the stdlib http server recovers panics from handlers.
dastx 1195 days ago [-]
Is there a reason to prevent concurrent reads? Seems rather silly considering the db should handle all that stuff.

Also, one thing I'm still waiting for is a decent, non-complex DI that works using go generate. I'm aware of Google's wire (?) but I've looked at it multiple times and I still struggle to understand it.

throwaway894345 1195 days ago [-]
I've never understood the value proposition of a DI framework. Why would I want one when I can initialize my objects in main()? Is XML or JSON or whatever really that much more pleasant than wiring together Go objects?
orisho 1195 days ago [-]
I'm with you on this. Dependency injection seems to me to replicate some of the features of interfaces while introducing complexity because the injection is indirect -- instead of having code that initializes a different object, it all happens dynamically during runtime using reflection. An antipattern as far as I'm concerned. It seems to achieve little or no gain at a very high cost.
specialist 1194 days ago [-]
Me three.

DI and IoC are for people ignorant of or hostile towards composition.

Further, aspects and reflection are for those unwilling or unable to make reasonable architectural assumptions.

Said another way, meta programming is for personal projects. And maybe for small, disciplined, high trust teams.

Most devs are average (axiomatically) and most projects are CRUD or scraping. So choice of these tools is self soothing to mitigate inferiority complexes. Like Mensa.

throwaway894345 1194 days ago [-]
> DI and IoC are for people ignorant of or hostile towards composition.

This isn't helped by the messy terminology. "Dependency Injection" is literally another term for "composition", but dependency injection frameworks imply automating the composition of one's object graph. However, people who like these frameworks don't seem to be aware that they can more easily compose their object graph using the structures available in most general purpose languages (literals for lists, maps, structs, etc as well as function calls and so on).

Arguably there's some repetition in constructing an object graph (initializing a list of objects that vary only slightly) that one might want to DRY up, but we already know how to do that with helper functions in host languages, and anyway this is just boilerplate--it's almost certainly not where your bugs are, and it's not where your developers are spending their time. A framework introduces a bunch of complexity at the top level (near main()) i.e., the stuff that everyone from developers to testers to operators/sysadmins will probably need to dig into at some point, and all that for no material advantage to anyone.

specialist 1194 days ago [-]
Agree on all points.

VRML-97 is the near pinnacle of human achievement for declaring scene graphs, a special use case of object graphs. It had reuse. It had patch cords (specifying non parent-child relations). It only lacked path expressions.

I really wish I could tell younger me to publish my own VRML successor, way back when. Young me forfeited when confronted by XML's JavaScript-like metastasis, which had overwhelmed all rational human endeavors. Then maybe I could have spared humanity the indignity of JSON and kin.

Had I only known that all bouts of irrational exuberance eventually implode...

throwaway894345 1195 days ago [-]
It sounds like people just want a little bit of dynamic typing to assemble their object graph or something, but that's such a small (possibly negative) value add for such a steep price (everyone who might touch this software--new team members, system administrators, whoever takes over maintenance, etc--now needs to know the DI framework for even the most basic troubleshooting).
isbvhodnvemrwvn 1195 days ago [-]
It doesn't have to happen at runtime. For instance in Java you have annotation processors which generate the boilerplate for you (in DI space I'm only aware of Dagger 2 that does it). It's a bit inconvenient but you can easily audit what's going on. That being said I somehow doubt a similar tool is used with go.
sbergot 1195 days ago [-]
You can initialize your object with main. But if you are reusing some implementation in multiple object you will start to repeat yourself a lot in the initialization.

A DI framework allows you to set conventions to avoid repeating yourself in the initialization phase (i.e. if a class requires a parameter 'foo', look for a class named 'FooImpl').

throwaway894345 1195 days ago [-]
I guess this makes sense, but DRYing up my object graph assembly doesn’t seem particularly important, and when it gets excessive I’d just create helper functions. This way everyone who touches my code (including the poor souls who operate it downstream) don’t need to understand my DI framework (how the DSL files are loaded and mapped onto code) for even the slightest debugging.
valenterry 1195 days ago [-]
Could you give a more concrete example? How would you "repeat" yourself?
permille42 1195 days ago [-]
The purpose of DI is to allow the use of a DSL to instantiate and connect objects, with configuration for those objects embedded into the DSL so that the setup and the way things work can be changed quickly without altering code.

Mocking objects for testing purposes and swapping them for the real objects is also something commonly done that is helpful.

There is no "real" DI for Golang as far as I've seen. The only DI systems I've seen that effectively do the sort of thing I am describing are for Java.

throwaway894345 1195 days ago [-]
I guess maybe this would be useful for a language like C or old-school C++ where you had to imperatively build your maps and lists and allocate memory and explicitly type your variables and so on, but Go memory allocation and typing are implicit and maps and lists have a nice literal syntax so I don't see any value in a DSL.

As for testing, you can already swap out real objects for test objects--that's a property that interfaces provide; I don't see how a DI framework (e.g, the DSL) helps you here.

I will say that when I've had to operate a Java application, trying to do even the smallest bit of debugging had me digging through all sorts of layers of XML nonsense (Spring, various Spring plugins, and who knows what else) to figure out where the logs were being written. Note that my objection isn't "XML" (versus some other format/DSL), but rather the pointless indirection. I'm sure someone proficient in Java would have no problem, but now your sysadmins need to be Java developers in addition to system administrators (maybe not such a big deal if your shop is already a Java shop and you have people you can ask, but for people operating third party software this is a real pain point). And again, all of that tedium for no apparent value.

permille42 1195 days ago [-]
The argument for and desire for DI is closely tied to the low code movement. It is tied to configuration as code.

It is certainly a different mentality from straight out coding everything.

I think it is important and will grow bigger in time, because low code is a type of metaprogramming.

You don't "need" to use low code stuffs or do metaprogramming, but if you know how and learn it well you can get much more complex things done quicker than coding everything.

The stuff you are objecting to is due to Spring and such seeking to be a type of generalized metaprogramming instead of focusing on custom DSLs. A custom DSL will have less verbosity, not more.

In a good low-code setup, all your logs should go to the "log" module, and there should be very simple configuration indicating where the logs go.

lostcolony 1195 days ago [-]
By DSL...you mean XML/JSON? Because that's literally the only thing I've ever seen used for DI purposes. And then invariably there's still just two versions of any given injectable interface; the one used in production, and the one used in testing.
jolux 1195 days ago [-]
DI is a decoupling technique. You might only have two interfaces to begin with, but the rough idea is that writing to interfaces and using IoC allows you to make many changes by adding code without having to change old code.
sagichmal 1195 days ago [-]
Sure, but you don't need a framework or DSL for this in Go.
jolux 1192 days ago [-]
You don't need it, strictly speaking, in any language. But as applications scale, often an IoC container makes managing lots of dependencies easier.
virmundi 1195 days ago [-]
I don't need them in Java or Typescript. The benefit is that I don't have to write these boring, but necessary pieces. As applications grow larger, especially with Go's desire to have an interface with only one method/function, DI requires a lot of boilerplate.

If there was a DI for go that used generate, then I would have compile time checking of dependencies. This would satisfy the community's sense of purity while satisfying my sense of annoyance at having to write this same process for every project.

sagichmal 1195 days ago [-]
> The benefit is that I don't have to write these boring, but necessary pieces

The component graph of your application isn't boring, it's the most important part of the thing, and the starting point for anyone trying to build a mental model of the thing. It should be front and center, never hidden away behind generated code.

skjfdoslifjeifj 1195 days ago [-]
> The benefit is that I don't have to write these boring, but necessary pieces.

The "boring, but necessary pieces" are usually just a call to a constructor. In my opinion it's almost never worth using a DI system that obfuscates dependency resolution and usually can't be checked at compile time just to avoid that.

throwaway894345 1195 days ago [-]
Agreed. It seems silly to call "invoking constructors" "boring" but doing all of the same work in XML is somehow more interesting? It seems like you're just adding in a layer of indirection that does nothing besides exchange Go/Java/etc for XML/Groovy/etc at the expense that one must be familiar with the DI framework to understand how the DI files are loaded, linked together, and mapped to your application code.
permille42 1195 days ago [-]
The main benefit of constructing objects and tying them together via DI is to allow polymorphic handling of responsibilities.

It is more complex than simply "I just make an interface and make everyone agree on that interface". Everyone agreeing on the interface to use is very unlikely.

The underlying data in different implementations will be different. This is something Golang fails terribly at because it doesn't have polymorphism.

This is, I believe, why there are so few DI systems for Golang, and most of them are of the sort you are referring to as silly.

The way Golang works, and the reccomended patterns for Golang are somewhat anti-DI.

throwaway894345 1195 days ago [-]
> The main benefit of constructing objects and tying them together via DI is to allow polymorphic handling of responsibilities.

This is already provided by interfaces, as previously discussed. To be clear, dependency injection makes sense; however, dependency injection frameworks don’t make sense to me.

> The underlying data in different implementations will be different. This is something Golang fails terribly at because it doesn't have polymorphism.

Go definitely has always had polymorphism; that’s the whole point of interfaces.

> The way Golang works, and the reccomended patterns for Golang are somewhat anti-DI.

DI (assemble your object graph in main() instead of distributing it across dozens of constructors a la OOP) is idiomatic Go; DI frameworks are not.

permille42 1195 days ago [-]
Interfaces don't allow polymorphism, because they don't allow you to change the underlying data. The main problem is that you can't ( or at least aren't supposed to ) use any pointers and especially not pointers that point to different data types in different situations.

This sort of behavior is core to polymorphism. It can be done in three ways in Golang ( and probably more too... ):

1. Use serialized messages in channels to do all messages to objects ( disgusting imo... ) It would though at least let one emulate the behavior of message passing / routing languages ( pursuant to original visions of smalltalk etc )

2. Use "unsafe pointers" and just do everything the way you would in C, deliberately going against the way the Golang authors want you to do things.

3. Use reflection and messy if/else in combination with code-generation at compile time. ( this is what a bunch of Golang DI systems do :( )

I don't think you understand polymorphism very well.

I don't give a shit what people are calling DI frameworks these days. I also don't much care for things that simply instantiate a bunch of objects and tie them together. That is only a very elementary variety of metaprogramming.

Essentially, what I am claiming is the Golang is a bad language for metaprogramming, and that in other languages the DI systems they have have become a type of metaprogramming that I think is respectable.

sagichmal 1195 days ago [-]
Interfaces absolutely express a type of polymorphism: any concrete type that satisfies the interface can be used in its place. What makes you think otherwise?

> Essentially, what I am claiming is the Golang is a bad language for metaprogramming

That's definitely true, and an explicit choice. Thank goodness!

permille42 1195 days ago [-]
There is no "either-or" data type in Golang. That's why.

It can only be accomplished by inefficient functional hackery.

In C, you just make a struct, have a type present in the struct, and then cast the struct pointer to extended object types to gain additional functionality.

In this way you can easily accomplish all sorts of fun things like inheritance. Message passing type designs can easily be accomplished also in C.

In Golang? Well... no. You are essentially forbidden from doing any simple casting or extension. You are essentially stuck with hardcoding the crap out of everything or making your own vcall like system build out of Golang types... which you can't really use in the way you want unless you use reflection.

What I can't understand is why anything thinks that Golang does support polymorphism. They admit it themselves. They are working on it. Only the new alpha test versions have a solution for it. The current released version is not polymorphism no matter how much you want to fucking label it that way.

You can't just go "hey it supports a little bit of what everyone knows as polymorphism". That's like saying alcohol is like orange juice because they are both bitter in some cases.

morelisp 1194 days ago [-]
Go lacks parametric ("generics") polymorphism but has interface ("duck typing") and effectively, via the syntactic sugar for embedding, also has subtype ("virtual methods") polymorphism.

It's 30 years too late to complain about three unrelated approaches to dynamic dispatch having the same name.

throwaway894345 1195 days ago [-]
I think you’re probably trying to make a substantial point but you seem to be mistaken about several things with respect to Go and polymorphism and interfaces such that I can’t figure out what your actual, substantial point is.

> Go is a bad language for metaprogramming

You’re absolutely right here.

> There is no "either-or" data type in Golang. That's why.

Correct here too, Go doesn’t have sum types. If you want sum types, you have to emulate them via interfaces. But I don’t see how that relates since all of this DI stuff seems to be dynamically typed anyway (errors at runtime) assuming you’re not taking a codegen approach anyway.

> In C, you just make a struct, have a type present in the struct, and then cast the struct pointer to extended object types to gain additional functionality.

I don’t understand what you’re trying to do here. First of all, this only works for the first field (and obviously isn’t memory/type safe).

> In Golang? Well... no. You are essentially forbidden from doing any simple casting or extension. You are essentially stuck with hardcoding the crap out of everything or making your own vcall like system build out of Golang types... which you can't really use in the way you want unless you use reflection.

As a general rule of thumb you can do almost anything in Go that you can do in C if only by delving into the unsafe package; however, “unsafe” is almost never necessary—interfaces typically suffice. You certainly can emulate inheritance if you don’t care about type-safety, just like in C. Unfortunately I can’t say more until you clarify your objective.

> What I can't understand is why anything thinks that Golang does support polymorphism. They admit it themselves. They are working on it. Only the new alpha test versions have a solution for it. The current released version is not polymorphism no matter how much you want to fucking label it that way.

I think you must mean some other word because interfaces are the canonical example of polymorphism and Go has the best interfaces in the business. :) I’ve never heard the Go maintainers claim they lack polymorphism (Go does lack type-safe generics and sum types, but so does C). In an earlier post you argued that interfaces weren’t polymorphism because they don’t let you modify the underlying data, which is patently false—this is the whole point of interfaces. In Go:

    var r io.Reader // nil
    r = someFile // *os.File
    r = stringReader // *strings.Reader
permille42 1194 days ago [-]
Whatever. You obviously don't follow what I am saying at all so why bother.
sagichmal 1194 days ago [-]
Polymorphism doesn't require casting or union types or whatever it is you're describing.
jamra 1195 days ago [-]
In Go, you can just use an interface which would make things mockable for testing.
ptr 1195 days ago [-]
But you still need to send around the object that implements the interface, right? That’s “DI”.
jamra 1195 days ago [-]
I don't think you need to do that.

Assuming a web application... You can instantiate the object in your main func and then attach it to the server struct so that it is available in every request. When mocking, you can create a mock server that instantiates mock items that implement the interface. If you need something that is contextual, you attach it to the context. In that case, you can still use mock objects, however, you would have to use a different middleware that handles the mock context objects instead of the normal middleware.

permille42 1195 days ago [-]
By DSL I mean something like how HTML is used to make websites. HTML was derived from SGML, and is related loosely to XML, but does not contain a lot of the complexities of XML.

I do not mean JSON, as JSON is not a good generalized meta-programming notation. People use it that way but it ends up ugly and hard to read.

JSON can be used alright as a configuration notation, but does not represent itself well as a metaprogramming language, since it does not preserve order or allow mixed content. You can get order by using an array of course, but it is much more verbose than the equivalent content in XML or HTML.

You might say, by DSL, I mean "custom markup languages".

sagichmal 1195 days ago [-]
The things you describe are pretty strongly understood as antipatterns in Go.
virmundi 1195 days ago [-]
I hope to spur conversation on this. I've seen many in the Go community argue against abstractions. Many say, "Pass the database connection in the params," or "make the DB pool a global variable." How can you write tests for logic without having to instantiate a DB? Many gophers appear to say I should have essentially a giant transaction script for each handler (https://martinfowler.com/eaaCatalog/transactionScript.html). The handler opens the DB, makes it at least function scoped, and all my business logic goes in the handler, or some similar method where the DB connection is passed. Now my tests are functionally integration tests. This makes them both slow and poorly suited to exercising error handling.

When I write code in most OOP-like languages, I follow the Clean Architecture model. The use case is an actual struct with the interfaces defining the repositories/services as members. I am now free to test the use case in isolation. I can test failure cases too, since I return an error, which can be easily mocked. I write a factory to create my use cases. The handlers get the factory passed in as an argument to the constructor for the server. I can now test handlers in isolation by passing a factory with mocks.

sagichmal 1195 days ago [-]
> Many say, "Pass the database connection in the params." ... How can you write tests for logic without having to instantiate a DB?

You define and use an interface to model the DB, and use that as the mock point. Basic stuff.
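A sketch of that mock point (TaskStore, CreateTask, and memStore are made-up names for illustration): the logic depends only on the interface, so unit tests never open a connection.

```go
package main

import (
	"errors"
	"fmt"
)

// TaskStore is the seam: business logic depends on this interface,
// never on *sql.DB directly.
type TaskStore interface {
	Save(title string) (int, error)
}

// CreateTask is the logic under test; it never touches a real DB.
func CreateTask(s TaskStore, title string) (int, error) {
	if title == "" {
		return 0, errors.New("empty title")
	}
	return s.Save(title)
}

// memStore is the in-memory test double.
type memStore struct{ titles []string }

func (m *memStore) Save(title string) (int, error) {
	m.titles = append(m.titles, title)
	return len(m.titles), nil
}

func main() {
	m := &memStore{}
	id, err := CreateTask(m, "write docs")
	fmt.Println(id, err) // 1 <nil>
	_, err = CreateTask(m, "")
	fmt.Println(err) // empty title
}
```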

throwaway894345 1195 days ago [-]
To add to your answer for those who aren't familiar with the basics of mocking, here's a StackOverflow answer that I've shared with others--maybe it will help elucidate things: https://stackoverflow.com/questions/19167970/mock-functions-...
virmundi 1195 days ago [-]
I'm aware of making an interface for that. I advocate it. What I see people on r/golang saying, and even on HN, is to just pass the DB connection either explicitly or in the context. I hate this. It makes things bound to DBs.
morelisp 1195 days ago [-]
A "DB connection" in Go is already several layers of abstraction. You could have real production connected to your postgres, integration tests connected to a sqlite file, and unit / functional tests via sqlmock.

Everything is still 'bound to DBs' but that's because your program needs a source of data. Faking a second data source other than the DB via a higher-level shared interface is just inviting integration failures.

virmundi 1195 days ago [-]
Would you have the DB connection be a prominent required parameter of all your functions? Would you have the bulk of your code be integration tests then?
morelisp 1194 days ago [-]
For me this depends on the application. For a REST API I'd probably open a Conn or Tx in an early middleware and carry it on a request context, or via explicit parameter, depending on the HTTP router's features. (In Go that usually means a request context, unfortunately - a more powerful type system would ideally get you more featureful type-safe routing.) For a more RPC-like or compute-focused API I might keep some handle to the DB (or other data source) on a method's receiver and route/dispatch to that method. (I think this is the same thing oppositelock suggests elsewhere in this thread - https://news.ycombinator.com/item?id=25807562)

> Would you have the bulk of your code be integration tests then?

I'm not sure if you mean the bulk of my code or just of my tests - for a REST API I would expect mostly functional tests. Do e.g. PUT+GET and make sure the result makes sense. This would be the case regardless of DB architecture.

sagichmal 1195 days ago [-]
Some people say this, but when they do, other more senior people quickly interject and say it's a bad idea.
throwaway894345 1195 days ago [-]
I agree. I’ve seen more prominent figures argue against global state often. Using global state is certainly not idiomatic Go.
morelisp 1194 days ago [-]
> Using global state is certainly not idiomatic Go.

Unfortunately it's not this simple (and probably multi-idiomatic).

Go definitely adopts more global state than other languages; I don't know any other language that offers a default-global HTTP client and server. Now, part of that is because Go's stdlib goes out of its way to make these appear stateless even though they are not - and this is good, even if you (often rightly) don't use the default ones.

But I think a lot of people saw those carefully engineered APIs and instead ran with "globals are OK in Go!" Lots of packages have global-level configuration properties - some of this is a hacky replacement for DI e.g. most logger injection. Well, OK, I can support/tolerate some of that because DI in these cases is usually a hacky replacement for real AOP language support. But some of it just shouldn't be global. e.g. Gin debug vs. release vs. test mode should be a setting on the Engine.

And then you get into really bad stuff - I don't know why, but it is common to have a global sql.DB, or sarama.AsyncProducer, or whatnot. A lot of novice Go developers - anecdotally skewed towards previous PHP users, I think because they are not used to having truly global variables - use a global for anything concurrency-safe. And this has ended up in a lot of low-quality tutorials/examples/SO questions, so I don't see it going away any time soon.

permille42 1194 days ago [-]
In recent history I have worked a lot with Golang and also encountered the tendency towards globals being the recommended path.

Specifically, I encountered it with logging, frameworks (Gin), and DBs. It is interesting to me that you called out these three things specifically, having encountered them all myself.

I agree also that it is a result of having poor DI support from the language.

I also like your point about PHP users using Golang. The approach Golang uses towards serving web content is very similar to me to how initial PHP frameworks did it.

Go tends to make it easy to set up concurrent work, and so I've found myself spending a fair amount of time speculating about what is thread-safe and what isn't.

No matter how much I see it I still do a double take when I see globals in Go modules. Seems like a bad practice to me.

sagichmal 1193 days ago [-]
> globals in Go modules. Seems like a bad practice to me.

It is, and good packages don't have them.

omginternets 1195 days ago [-]
What makes something a "real" DI framework? Wouldn't go.uber.org/fx qualify?
permille42 1195 days ago [-]
I use "real" in quotes to indicate I am meaning something other than the general meaning of the word "real". What I mean by this is a specific type of DI that I view as effective and a type of meta-programming. Such a DI system does more than just auto-instantiate objects. It also provides a DSL that allows for configuration and ordering of the object instantiation during different phases of system execution.

fx doesn't do that. It is primarily just a convenience way for tying things together in the single intended way. You can provide some configuration but it is not via a DSL; it is by making calls to the fx library.

Realistically to do the sort of DI I am referring to with Golang there would need to be a preprocessing step during compilation that generates Golang code.

omginternets 1193 days ago [-]
I see. Echoing the sibling comment: can you provide a bit more context for the kind of programming that you do? When does such a 'scriptable' DI framework become necessary/preferable? My initial reaction is one of aversion, which makes me think I haven't encountered the kind of problem where scriptable DI solves more problems than it creates.

The closest thing that comes to mind is writing software with some sort of global configuration. i.e.: the user provides some options (CLI flags, environment variables, configuration files, etc.) to change the runtime behavior of the software (e.g. use filesystem storage vs S3 storage, etc.).

In Go, I've always solved this pretty straightforwardly by combining Fx and functional options [1].

[1] https://dave.cheney.net/2014/10/17/functional-options-for-fr...
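A minimal sketch of the functional-options pattern from that link, combined with a swappable storage backend as in the filesystem-vs-S3 example above (Storage, WithAddr, and friends are illustrative names):

```go
package main

import "fmt"

// Storage is the swappable backend (filesystem vs. S3 in the
// comment above).
type Storage interface {
	Name() string
}

type fsStorage struct{}

func (fsStorage) Name() string { return "filesystem" }

type Server struct {
	addr    string
	storage Storage
}

// Option mutates a Server during construction.
type Option func(*Server)

func WithAddr(a string) Option      { return func(s *Server) { s.addr = a } }
func WithStorage(st Storage) Option { return func(s *Server) { s.storage = st } }

// NewServer applies options over sensible defaults, so callers only
// specify what the user actually configured.
func NewServer(opts ...Option) *Server {
	s := &Server{addr: ":8080", storage: fsStorage{}}
	for _, opt := range opts {
		opt(s)
	}
	return s
}

func main() {
	s := NewServer(WithAddr(":9090"))
	fmt.Println(s.addr, s.storage.Name()) // :9090 filesystem
}
```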

throwaway894345 1195 days ago [-]
Why do you need a DSL? Go already does the things you describe with so little boilerplate that I don't see what value a DSL could provide to justify its own complexity.
tptacek 1195 days ago [-]
With respect to DRY'ing the JSON code, isn't something like this workable:

    err = json.NewEncoder(w).Encode(&task)
I know there used to be a reason why this was disfavored but thought it had been addressed in the stdlib.
bpicolo 1195 days ago [-]
Defining them all on a single server struct means once you're past a handful you start having a hard-as-heck to organize folder of handlers (or a bunch of massive code files).

How are folk managing Go endpoint as apps scale across number of endpoints? I'm skeptical that one struct in one folder leads to good app structure, and have seen this start to break down in a few cases in practice

oppositelock 1195 days ago [-]
It depends on what the struct contains. I have developed many Go APIs professionally at several companies since Go 1.1, and all my servers end up looking like a server struct with only a few fields - a database, an AWS client object, and some prometheus metrics. The logic is typically split among many files, all implementing receivers on that struct.

If you have independent, different elements in that API, you break them out into separate "servers" but still register the endpoints on the same HTTP handler.

I know that people don't like external libraries too much, but I'd like to plug my own here. You declare your API in OpenAPI 3.0 (aka, Swagger) and it generates your server and models for you, so all you need to do is write the business logic. (https://github.com/deepmap/oapi-codegen)

bpicolo 1195 days ago [-]
Thanks for the link - I've had trouble finding pretty much exactly this in the Go ecosystem after a few brief looks, and codegen is really the only sensible way to do openapi in go. I wouldn't be surprised to use this soon
barefeg 1195 days ago [-]
Is there a spec-first framework for go that also follows a middleware paradigm?
yolo42 1195 days ago [-]
Yes. Please take a look at grpc-gateway. You write your specs as proto files and then get handlers and middlewares to implement the endpoints.