simonw 282 days ago [-]
This is a really cool product - but it's a very different beast from Now 1.0.

Now 1.0 really was my perfect deployment tool: you can HTTP POST it a Dockerfile and it will build that container for you, start it running and start routing HTTP traffic to it.

This means you can use it for essentially anything that can be expressed in a Dockerfile. And the Dockerfiles you use work anywhere that runs Docker, so local development is really simple.
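To make that concrete: the kind of deployable unit being described is just a plain Dockerfile with nothing Zeit-specific in it. A minimal sketch (the app file, base image, and port are placeholders):

```dockerfile
# A minimal container that Now 1.0 could build and route HTTP traffic to.
# Any image that listens on a port works; app.py is a placeholder.
FROM python:3.6-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["python", "app.py"]
```

The same file builds locally with `docker build`, which is exactly the portability being described.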

Now 2.0 is a totally different thing. It's a way of running HTTP-fronted lambda-style functions written in Go, Node, Python or PHP. It looks like you can add other platforms by writing a new builder, but there aren't any community-driven examples of that just yet.

It's cool, but it's a completely different product from Now 1.0 - and for my particular use-cases (my Datasette project and serving machine learning models) it sadly doesn't appear to be suitable.

I kind of wish this was called something else - "Zeit Lambda" perhaps - and old Zeit Now with the Dockerfiles could live on as a separately named project (rather than being the legacy earlier version of Zeit Now).

Rauchg 282 days ago [-]
We are quite confident that 2.0 can meet all the customer needs of 1.0, but going far beyond in terms of scalability, at a much lower cost.

A great example of this is "doing work in a loop". Sure, you can spin up a process and try to schedule work yourself. At that point you're responsible for monitoring, and addressing all its failure modes.

In the 2.0 world-view, you can delegate a lot of the tough synchronization and scalability problems to the platform instead.

simonw 282 days ago [-]
I think the main problem I'm having is that 1.0 was already cheap enough and scalable enough for my specific use-cases.

I totally understand how 2.0 is cheaper and more scalable, but the trade-off is that it's much less suitable for serving occasional requests for data that depends on large server-side files (in my case machine learning models and SQLite databases, both of which can be 100s or even 1000s of MBs).

I guess what I'm saying is that for my specific use-cases I don't need a Now that's cheaper and scales more easily - 1.0 already had the performance and price that I needed.

Now 2.0 (or Now Lambdas) does look amazing for cheaply handling a vastly scalable amount of incoming requests that don't need to work with a large blob of data on the server. Now v1 remains my perfect environment for hosting scale=1 persistent servers that can serve those large data blobs. I'm finding it hard to imagine a platform that can combine the best of both worlds - but I'm excited to see if you can indeed pull it off with 2.0.

silverstrike 280 days ago [-]
I loved Now 1.0, disappointed in 2.0. It's not accurate that all customer needs of 1.0 can be met. There are size limits deploying to AWS lambdas. Web sockets don't work in lambda. Caching is a different game if your functions can spin up and down often, not to mention no sticky sessions. DB connection limits become more relevant if you have a bunch of endpoints and put them all on their own lambdas since they can't share connections.

There's value in distributed architectures, but there are already lots of ways to add that complexity when you need it. I really liked what Now 1.0 was doing (more user-friendly than …), so this makes me somewhat sad.

There are very cheap competitors such as Apex Up for managing AWS lambdas if people want to go that route.

I hope I'm wrong about some of the above and you have plans I'm simply not foreseeing to address these issues.

jacques_chester 282 days ago [-]
Running docker daemons that handle arbitrary code in a multi-tenant scenario is a security and scaling nightmare. I don't blame them for trying to get out of the morass.
sholladay 282 days ago [-]
I have been a paid customer of Zeit for a while and have really loved their products thus far. But v2 is making me reconsider my plans. The other day, I tried to scale an existing app to an additional region and I promptly found out that Node apps are not supported in their newer regions, such as GRU. The new regions only support v2 and can't even run a Hello World without Docker, which is completely stupid. I'm not going to add any bloat to my app just to use those regions. And even if I did, I wouldn't want to go "serverless" any time soon, given the much more mature and robust ecosystem for traditional server frameworks. Lots of things around serverless are still being figured out. Yes, it is totally doable to run some things that way in production, but it's not yet what I would personally choose for most of my apps.
igneo676 282 days ago [-]
If I'm not mistaken, V1 just takes normal Node apps and wraps them in a Docker container. At least, that's how they handle deploying to platforms outside of Zeit such as AWS. V2 just makes that process explicit and normalized across all languages/frameworks
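For illustration, "explicit" in v2 means declaring a builder per source file in now.json. A sketch for a plain Node entrypoint (the file name is a placeholder; `@now/node` was the official Node builder):

```json
{
  "version": 2,
  "builds": [
    { "src": "index.js", "use": "@now/node" }
  ]
}
```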
sholladay 282 days ago [-]
I believe you are correct that v1 uses Docker behind the scenes. And that's fine with me so long as it doesn't infect my project, which hasn't been a problem until now. The beauty of Zeit's v1 platform was precisely that it would take my code and run it smoothly with no fancy configuration or modifications. It Just Works. I generally prefer explicit behavior, but this is a case where the behavior is not something I want to know or care about. My app should run on any common platform without special configuration or modification. And it does, everywhere... except on Now v2. I wanted to like it, but the desire is just not there. Maybe I'll keep using them for DNS but host elsewhere. Zeit's command line experience is top notch and I'd like to keep that aspect where I can.
Rauchg 282 days ago [-]
To be perfectly clear: Now 2.0 is already fully rolled out to all regions.

As a matter of fact, this is an example of executing `now --regions all` :)

sholladay 277 days ago [-]
I ran that exact command a few weeks ago and it tried to scale my existing app to GRU (which is what I want!) and it immediately threw an error saying that GRU requires the v2 Docker serverless crap. Not gonna happen! Maybe it's ignoring GRU now, but that just leaves me without a region that I wanted...
diminoten 282 days ago [-]
Docker's pretty great, I recommend at least trying to put a simple container around your app, if that'll get you onto v2. It's probably a lot easier than migrating your whole app off of their platform.

Also, look at it from their perspective. They support everything when they support Docker, no need to have specific tooling for Node.JS.

sholladay 282 days ago [-]
I have no use for Docker. It's literally just an extra layer that does nothing useful for me. npm starts the server and I'm good to go: npm start for production, npm run start-local to run with a local database. Keep It Simple, Stupid. I see so many people using Docker as their scripting file, even though it actually adds complexity. I have no problem with it existing for those who need it, but most apps absolutely do not benefit from it. In fact, I've occasionally had to write code to handle Docker containers in a special way... completely defeats the purpose of normalizing the environment.
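For reference, the setup being described is just npm scripts. A sketch, with `server.js` and the env variable as placeholders:

```json
{
  "scripts": {
    "start": "node server.js",
    "start-local": "DB_HOST=localhost node server.js"
  }
}
```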
wishinghand 282 days ago [-]
Docker was needless before v2 and then making us add it to just run ‘npm start’ is kind of dumb.
kristiandupont 282 days ago [-]
While this is cool and surely an improvement technically, it makes it clear that I will have to move off of Now. That makes me sad as I immediately became a paying customer and have loved the product more and more each day.

It’s not that I couldn’t upgrade, surely I could. It’s inconvenient but not impossible. The problem is that I need my hosting to be as reliable and invisible as possible. Being informed that I need to change my infrastructure out of the blue is unacceptable unless it’s because there is a security reason or perhaps because I am using severely outdated technology. I will now have to spend precious development time figuring out which alternative will match my build process etc.

brod 282 days ago [-]
This is the only disappointing release I’ve seen from Zeit, their original mission statement was to make “cloud computing” accessible to everyone and with Now v1 they killed it, providing a truly an awesome product. Now v2 is a pivot away from everything that made them unique, soon they'll be mistaken for a trendier version of Netlify...
Rauchg 282 days ago [-]
You'll find that nothing could bring us closer to this vision than making compute so remarkably inexpensive and scalable, out of the box, for everyone :) This is why serverless is so exciting to us.
ontouchstart 281 days ago [-]
I took a look at this

It seems that it might be possible to write a builder like @now/jupyter to automatically deploy Jupyter notebooks with a requirements.txt. The questions are:

1. Is your team going to write it?

2. Is it possible for a third party to write it and put it in now.json?

      {
        "version": 2,
        "builds": [
          { "src": "*.py", "use": "@thirdparty/jupyter" }
        ]
      }
conceptpad 281 days ago [-]
Not to speak for @rauchg and the team, but I think the answers are 1: no (or not soon) and 2: yes
ontouchstart 281 days ago [-]
I made a proof of concept project to wrap a Jupyter Notebook with Now 1.0 via Dockerfile

How could I achieve it in Now 2.0? Should I wrap the Jupyter server with a custom gateway? Before I give it a try, I’d like to hear advice from your team.

k__ 281 days ago [-]
I don't understand why so many people cling to VMs/containers.

I think you did the right thing

mindfulmonkey 281 days ago [-]
Now v2 requires a fundamental change in how development happens and how the application is architected. It is too prescriptive.

VM/Containers allow people to develop and structure the app however they want, largely decoupled from deployment/infra.

I don't understand how so many people have no issue being told to rewrite their apps so they fit inside lambdas...

k__ 281 days ago [-]
v1 isn't gone.
ZeikJT 277 days ago [-]
If they were marketed as different products there wouldn't be quite as much panic, but since this is V2 and V1, people are afraid that the product they like will be unsupported soon since it's an older version.

Also, while V1 still exists for the time being, it is fair to assume that future updates will only apply to V2. One user on here already found this to be true: although a new region was added, GRU, they could not deploy to it with V1 and were told it only supported V2.

StanAngeloff 282 days ago [-]
I was considering investigating Now/Zeit as a possible evolution over our current stack. Having seen this announcement I'm not so sure that is still a good idea. Now 2 is a completely different beast to Now 1. The very last thing we need is churn in our servers/infrastructure rivalling our frontend. Is Now 3 going to be [dramatically improving the reliability and scalability of -our- deployments] yet again?
Rauchg 282 days ago [-]
Keep in mind that the intelligence that goes into the builders (which are OSS, and which you can write your own of) is specifically aimed at not requiring code changes.

Surely, some code changes are necessary to go serverless (when considering legacy servers and apps), but our goal is to always minimize that.

ericand 282 days ago [-]
A couple significant callouts:

(1) "Behind the scenes, Now 2.0 works like an extensible build system and compiler, capable of transforming your sources into static files and serverless functions (lambdas) for production."

The demo is pretty slick. I've seen other frameworks where your lambdas are more explicit, but I'm curious to try this autogenerated approach.

(2) "The Now 2.0 platform features include:

- A unified deployment type: All deployments are one type, regardless of static or dynamic parts.
- Massive build parallelization: Each deployment can kick off many concurrent serverless builds.
- Monorepo support: Define API endpoints in Go, PHP, Node.js, Next.js, and such, in just one repository.
- Zero-instruction builds: Our open-source builders take the build and cache config burden away.
- Universal Cloud: Our platform leverages the best cloud infrastructure, with no lock-in or config.

With the following pricing improvements:

- Fully on-demand pricing
- Per-100ms pricing for compute
- Free seats for small teams
- A better free tier for our community"

It feels like all the Zeit features are coming together for a more complete offering. I love it.

Rauchg 282 days ago [-]
Auto-generation is a great way to put it. Cloud functions are an incredible primitive, but in our view, they constitute primarily a compilation target.

Some small adjustments will be necessary when _importing_ large legacy codebases, but that's where using functions to also build your project comes in!

playpause 282 days ago [-]
Can you point to an example of using functions to build your project?
tima101 282 days ago [-]

What fraction of Now customers care about cost and scalability so much that they will invest time into splitting a monolithic server into many lambdas (a monorepo)? (I've read in the Now 2.0 docs that it is recommended to have 1 lambda per 1 API route.) EDIT: After splitting, is there overhead in managing many lambdas vs a monolithic server?
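For context, the "1 lambda per API route" layout maps each source file to its own function via builders in now.json. A sketch with hypothetical paths (`@now/node`, `@now/go`, and `@now/static` were the official builders at the time):

```json
{
  "version": 2,
  "builds": [
    { "src": "api/users.js", "use": "@now/node" },
    { "src": "api/stats.go", "use": "@now/go" },
    { "src": "www/**/*", "use": "@now/static" }
  ]
}
```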

Once a server is split and configured, will these customers be able to migrate out of Now 2.0? From what I see right now, it is basically vendor lock-in.

Is it possible that the majority of Now's customers are websites, simple CRUD apps, and small SaaS apps, and that they don't care about cost and scalability that much and won't add lambda-related overhead to their code?

Do you guys try to shrink your customer base and work with perhaps bigger clients with critical apps who are more paranoid about scaling, cost and want to invest in lambdas?

Sorry about typos.

tapsboy 281 days ago [-]
I have similar questions if @rauchg is still reading. We just got a premium account and were setting up Dockerfiles and now.json based on Now 1.0.

All I want to do is provide a Dockerfile that includes a cross-platform build process. Including Now-specific build tools is not ideal for avoiding vendor lock-in.

Now has been amazing for us to build prototypes, internal sites, etc. that we may eventually move to our standardized AWS infrastructure. Adding Now-specific build tools makes the process unusable for us.

I understand from the docs that Now 1.0 is not deprecated yet, but it bothers me to invest more time in converting some of our internal projects to Now

ianstormtaylor 282 days ago [-]
Potentially noob question... but how does having a lambda function per route work with connection pooling for something like Postgres?

Also, does ZEIT have any plans to offer hosted databases like Heroku and others do? That's one of the biggest blockers in my mind to trying it out.

MatthewPhillips 282 days ago [-]
That depends on the provider and the database. For AWS Lambda with DynamoDB, connection pooling is handled "behind the scenes", and you don't have to worry about it. AWS supports SQL databases as well; I would assume those are also pooled for you.

NOW currently doesn't have databases and tells you to use cloud DBs. So more than likely the connection is dropped when the lambda shuts down.

But if you'll let me go a little bit meta: the idea of serverless is that you shouldn't even be thinking about things like connection pooling. If you just use the databases that your provider supports, those types of concerns are their concerns now, not yours.

In other words, if you want to pick your own tech stack then serverless is not for you (yet). If you're willing to surrender your opinions and just use what the platform provides you (if you're using AWS just use DynamoDB) then you can stop thinking about these problems and focus on your business logic.

ianstormtaylor 282 days ago [-]
Thank you for the detailed answer! That makes sense. Although I still find it strange... Now markets itself as preventing lock-in because it can deploy to any cloud, but to realistically use any database with it you have to lock yourself in to one of the cloud providers?
Aeolun 282 days ago [-]
To expand upon the previous answer. Connection pools are still a problem.

Even if your lambda drops a connection after it destroys itself, a sudden influx of visitors to your site may spawn enough lambda processes to overload your DB.

What happens then is dependent on the database, but generally means that new lambdas do not get to connect.

Anyhow, you don’t really want your database to magically move around, so you’d be sort-of locked in to one service regardless of what you do.

But postgres on one service is pretty much the same as on another, so if you wanted to migrate it would be fairly easy.
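Within a single function, the usual mitigation is to cache the connection in module scope so warm invocations reuse it instead of reconnecting per request. A sketch with a stand-in `connect()` (with Postgres you would cache something like `new pg.Pool({ max: 1 })` the same way):

```javascript
// Each lambda instance opens at most one connection: module-level state
// survives across warm invocations of the same instance. connect() is a
// stand-in for a real driver call.
let cached = null;
let opened = 0;

function connect() {
  opened += 1;                      // count how many connections we open
  return { query: (sql) => `ran: ${sql}` };
}

function handler(req) {
  if (!cached) cached = connect();  // pays the cost only on a cold start
  return cached.query('SELECT 1');
}

// three "requests" against one warm instance
handler({}); handler({}); handler({});
console.log(opened); // prints 1
```

Note this only bounds connections per instance; a burst that spawns N instances still opens N connections, which is exactly the overload problem described above.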

Rauchg 282 days ago [-]
What the parent comment is correctly saying is that for certain databases (those oriented around stateful connections) you might benefit from an infrastructure-level optimization.

However, this is not the case for all databases. Furthermore, modern database vendors are extremely motivated to address the serverless use case.

nickdandakis 282 days ago [-]
I too look forward to when ZEIT revolutionize datastores. Immutable datastores with the ZEIT UX (DX?) would be incredible.
ezekg 282 days ago [-]
You would likely need to use something like PgBouncer, which sits in front of Postgres, to keep connections under control.
ianstormtaylor 282 days ago [-]
So to use serverless, you need to setup an extra non-serverless instance in front of your database?
setr 282 days ago [-]
You can imagine the issue being that serverless is a stateless workload, and connection pooling is, well, stateful. Your db itself would ideally be the thing to solve that problem (its already the state-management system), but if it doesn’t, you’ll need to introduce something new to handle that state.

The problem also ofc already exists when you have multiple applications (or instances) trying to talk to your db (which is why pgbouncer exists regardless of serverless), and serverless doesn’t differentiate between 1 and 1000 instances (which is why its trivially scalable).

And ofc if you weren't going to need sufficient scale to require something like pgbouncer... should you even be transitioning to serverless? You could replicate the same stateless organization style on your own, on a single server.

coder543 282 days ago [-]
Or, you could run it on the same server that you're running your database on.

If you're not running the database server yourself, is it so unimaginable that the service provider might have already configured PgBouncer on top of the database as an optional way to interface with it?

Regardless, "serverless" has never meant that there are no servers involved, it just means that for the serverless pieces of your application, you don't have to concern yourself with where or how they're run, only with the application logic itself.

If you want to go 100% serverless, then you're obviously going to have to choose a DBaaS that works with your connection pooling strategy, especially if that strategy is "I don't have one." Supposedly, based on comments on this thread, DynamoDB works pretty well for that scenario. Obviously Postgres doesn't scale with an increasing number of connections very well, which is why PgBouncer exists, and it works well.

Finally, PgBouncer is relatively stateless. Its main pieces of state are a static config file and the pool of open connections it holds. This could be deployed in a serverless fashion, and any time it is destroyed or recreated, it would be trivially reinitialized just from the config. It doesn't have to have any attached, mutable storage.
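To make the "static config" point concrete, a minimal PgBouncer setup really is just a few lines. A sketch (host names and pool sizes are illustrative):

```ini
; pgbouncer.ini - clients connect to port 6432, and PgBouncer multiplexes
; them onto a small pool of real Postgres connections.
[databases]
appdb = host=db.internal port=5432 dbname=appdb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 20
```

Everything else (the pool of open connections) is rebuilt on restart, which is why it can be redeployed so freely.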

vageli 282 days ago [-]
> If you're not running the database server yourself, is it so unimaginable that the service provider might have already configured PgBouncer on top of the database as an optional way to interface with it?

Are you familiar with any providers that offer this configuration? In my experience, at least AWS does not.

ezekg 281 days ago [-]
Heroku has a PgBouncer setup in beta, I believe.


ezekg 282 days ago [-]
It seems so. Though, I’m in no way advocating server-less. :)
Rauchg 282 days ago [-]
This is something that is only necessary for legacy databases. Modern database providers give you gateways that deal with a huge number of connections with no issues.
ezekg 281 days ago [-]
I don't like the term "legacy database." What is a "modern" database in your eyes that is able to handle a huge number of connections?
trevyn 282 days ago [-]
Google Cloud Spanner is a counterexample. It can take close to 2 seconds to establish a server connection.
gigatexal 282 days ago [-]
Just run a sqlite db or connect to a hosted one -- though that doesn't answer your first question
iMuzz 282 days ago [-]
I have a question (I've tinkered with Serverless stuff but never on a production app) but my underlying assumption might be wrong.

I like that the cold boot up performance for a single function is super fast when compared to a "Legacy Server". But after the legacy server is up it "stays warm" regardless of user activity. Unlike a lambda function which goes cold after a few minutes of inactivity.

So the first user that hits an endpoint after some inactivity has to wait (a few seconds) for the lambda function to cold boot vs. being served immediately on the legacy server.

Is my assumption here true? Or are cold boots on lambda super fast now? When I was doing this stuff ~8 months ago it would take like 5+ seconds to be served after a cold boot.

Rauchg 282 days ago [-]
Servers will also eventually go cold, unless you pay for them 24x7.

With the Now 2.0 design, you instead are very likely to not suffer from cold instantiation to begin with, due to the healthy constraints imposed by the system.

In other words, instead of ignoring cold boots, you just want to embrace them and make them minimal to begin with.

manigandham 282 days ago [-]
You're right, they're slow. All the major providers are working on ways to speed up boot, and some users just send requests every few minutes to keep the functions warm.
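The keep-warm workaround mentioned here is typically just an external cron pinging the endpoint, e.g. a crontab entry like this (the URL is a placeholder):

```
*/5 * * * * curl -fsS https://my-app.example.com/api/ping > /dev/null
```

This only keeps one instance warm; a concurrent burst of traffic will still cold-start additional instances.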
dbbk 282 days ago [-]
What's the point of that? Why not just get a normal server if you want it available 24/7?
manigandham 281 days ago [-]
But then you wouldn't be enjoying "serverless"
trevyn 282 days ago [-]
Now 2.0 cold boot is under 200ms.
wahnfrieden 282 days ago [-]
AWS Lambda cold boot is <300ms for python
adjohn 282 days ago [-]
The cold start times have been quietly getting better over time on AWS Lambda. I just checked some python Functions and I'm seeing cold starts take around 80ms on average for a simple function.
k__ 281 days ago [-]
They already released 15min functions, I bet there will be something huge on re:invent this year.
zedpm 281 days ago [-]
Hopefully they have a solution for connecting to RDS from Lambda without incurring the huge cold start time and other problems of having the Lambda inside the VPC. Ideally it would be something like PgBouncer with a security group that only allows connections from your Lambda instances.

GCP has a semi-hacky solution that provides the ability to connect to your managed dbs from your Cloud Functions; one would hope that AWS can at least match that soon.

k__ 280 days ago [-]
Some people are speculating it will be RDS without VPC.
wahnfrieden 279 days ago [-]
No speculation, they already announced it’s coming at last year’s re:Invent.
navd 282 days ago [-]
Is there a way to develop and test this locally? It seems like you constantly need to deploy to Now to test new changes.
Rauchg 282 days ago [-]
We definitely look forward to giving you tools here. One can conceive a `now dev` process that faithfully reproduces the cloud gateway.
johnslemmer 275 days ago [-]
+1, this is very important to me, and without it I won't be transitioning to 2.0.
homerjam 282 days ago [-]
Can anyone offer an explanation of how this impacts Docker-based deploys? Since the serverless Docker announcement in August I've been hoping to move off of DO to Now, but it sounds like that won't be possible with a 5MB cap...?
Rauchg 282 days ago [-]
In our upgrade guide we detail how you can break down a "server" that you would otherwise deploy as a monolithic container:

The limits are there specifically to help in that transition, to avoid the problem described in the chart. They're a crucial learning extracted from the beta we ran.

cphoover 282 days ago [-]
So this doesn't work anymore: ??

Like I understand the benefits of lambdas/serverless. But they aren't sufficient for all use cases. The ability to have an endpoint for a docker container with one command is extremely useful. There are already a lot of serverless options on the market too... maybe I'm confused...

mattste 282 days ago [-]
I'm in the same boat. I've spent a lot of time getting my company's deployments set-up on Now's Docker offering (which was awesome). I'm not looking for my app to be serverless. And I'm also not looking for a "Majestic Monorepo".
Rauchg 282 days ago [-]
Please keep in mind v1 will be around. We'll compel you to move by thoroughly reaching feature and use-case parity :)
gigatexal 282 days ago [-]
Can't you use Now v1.0 by setting now.json to have this among other things:


mattste 282 days ago [-]
NOTE: v1 is fully maintained and supported. We will only announce a deprecation date once we have ensured all our customers workloads are migrated and the tooling is in place for a smooth transition.[0]

Based on that, it seems to me like I will have to migrate to Now 2.0 eventually.


cpursley 282 days ago [-]
This is looking pretty compelling for serverless docker:
manigandham 282 days ago [-]
All of the major clouds now have (or will release soon) the ability to run a Docker container and get an endpoint, with auto-scaling.
homerjam 282 days ago [-]
I place high value on simplicity and portability so despite not being fully optimal a monolithic docker app is a good choice for me.

The reason I'm still on DO is because I need dependencies that are both large and trivial to install in docker vs. creating a custom EC2 image. To make it clear I was hoping to deploy a docker container with ffmpeg to support long running video conversions whilst benefiting from easy deploys and on demand pricing. It seems like the potential for this alluded to in the serverless docker announcement has been removed?

I can see how this V2 makes serverless simpler, but the strict limits mean that transitioning a monolithic app requires a lot of work up front. Increasing the limits would reduce that friction greatly. Whilst not best practice, deploying an existing Express app and having it 'just work' is a compelling reason to go serverless.

mattste 282 days ago [-]
One of the major pain-points I've hit with this concept of a "Monorepo" is tooling support since many things are built on the premise of separate repos for each code base. Pain points: Git+Github issues, CircleCI, etc. How do you handle those problems?
richthegeek 282 days ago [-]
Could you use git submodules? Keep your distinct repos and have a monorepo that updates are PRed into?
sjroot 282 days ago [-]
To me, the biggest things are the pricing improvements, including the changes to the free tier. This will make it much easier for small teams to go from concept to deployment without having to worry about exposing their source code or ending up with a huge bill. Thank you all for your hard work!
alexashka 282 days ago [-]
For someone who's not in the loop, can someone give a brief explanation of what pain point this solves?

The space of 'easy deployment' and 'scale' seems incredibly crowded, from reading the headlines - what makes this different/better?

Rauchg 282 days ago [-]
As far as we are aware, no one until today had solved the problem of compiling to serverless functions.

If you look at the solutions out there, they all involve you dealing with low-level APIs or config that have little to do with how you write most apps and frameworks in mainstream programming languages.

keithwhor 282 days ago [-]
Guillermo and his team are focused on developer experience first and foremost; the zero -> “hello world” in any language they support is ridiculously quick and painless.
sooheon 282 days ago [-]
If this is hosting front-end files for me, and is "serverless", how do I hook up my db, or deal with the filesystem, etc.? Or is this only for static sites?
crooked-v 282 days ago [-]
You put database access, S3 access, etc in your lambda functions, using env vars configured in the Now account settings.
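A sketch of what that looks like with the `(req, res)` handler shape used by @now/node - here `DB_URL` is a hypothetical secret exposed to the function as an env var (configured via `now secrets add` and referenced from now.json):

```javascript
// api/users.js - the platform injects secrets as environment variables,
// so the function itself stays free of hard-coded credentials.
const handler = (req, res) => {
  const dbUrl = process.env.DB_URL; // injected by the platform at runtime
  const status = dbUrl ? 'db configured' : 'db missing';
  res.end(JSON.stringify({ status }));
};

module.exports = handler;
```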
DevKoala 282 days ago [-]
Can `Now` serve as the backend for latency-sensitive endpoints? My main problem with AWS Lambda a couple of years ago was the latency. For a dumb function, imagine a single K/V lookup, is a <10ms response time a possibility?
manigandham 282 days ago [-]
Cloudflare Workers does well here with their KV key-value store integration.
tmvnty 282 days ago [-]
As a big Next.js fan, this is a big release to me. React is my go-to front-end choice, and Next.js does an amazing job of server-side rendering among many other great things. Now v2 supporting Next.js out of the box means I can use lambda functions to serve SSR React, which will save me a lot of boilerplate and maintenance work, so I will definitely check out Now again and probably try a few demo apps...

But all great things aside, having Now v2 tied to AWS Lambda means I would have to switch to AWS too. I like to stick with one cloud provider, and my current favourite is GCP/Firebase, so I will have to weigh the pros & cons of AWS/GCP again for Now v2. Hopefully in the near future, Now could be cloud-provider agnostic.

Rauchg 282 days ago [-]
Very important: you absolutely don't have to switch to or configure any cloud provider.

However, it is a very healthy thing to understand the "behind the scenes" in case you need to interoperate with existing APIs or databases[1]


tofflos 282 days ago [-]
Are you looking into making builders for Java and are you looking into offering some sort of persistent storage?

I liked the 1.0 Docker-version and 2.0 feels like a completely different product. Have you considered offering both long-term?

tapsboy 282 days ago [-]
I second that. Long-running docker containers are more beneficial in many use-cases. Now 1.0 was the closest thing I found to Azure Container Instances and AWS Fargate

Offering 1.0-style deployments on an ongoing basis would be helpful.

robotkdick 282 days ago [-]
I have been using Now in production for about a year. I fell in love with the version 1 product because it lived up to the mission, which I think went something like: make deployments as easy as using an iPhone.

I trust the leadership at Zeit are making the right decisions technically, but as the company grows, it also seems to get further away from its original mission.

The swift deprecation of previous versions is threatening to undermine any resemblance to the mission, if ease of use is still the mission.

The React team got this so right with the release of hooks in 16.7. Dan couldn't have been any more right on the money in his delivery, which was laden with promises of no breaking changes and "don't feel like you have to rewrite anything."

When Zeit released cloud v2, about three months ago, they made v2 the default, which broke many development workflows and required me personally to spend three full days refactoring code and resolving Docker issues due to an obscure error that Zeit support had trouble identifying. The breaking change was a shock and a surprise. The explanation? You should be doing things this way anyway.

Perhaps that was true, but maybe not.

After going through all the trouble of converting to cloud v2, I reverted to cloud v1 because I could not set the min instances in cloud v2, to eliminate cold boot as an issue. Someone on this thread said cold boot is 200ms. That may be true for a particular application, but I received so many customer complaints about slow boots (5 seconds or more), I had to revert. Reverting has solved the issue.

As of today, I have a deprecation warning when I log into Now which says `Your account is using a legacy platform version. We highly recommend upgrading.` Or what? Are you going to make my immutable application mutable?

This announcement about Now v2 is confusing, first of all because Zeit already released cloud v2. How are the versions related? Next, serverless may be the future of everything, or it may not work for some existing environments. The jury is still out.

To someone at Zeit, please watch the React Hooks intro video, the parts by Dan Abramov and Ryan Florence in particular:

This is a great way to treat your stakeholders. It also helps that React is going in a direction focused on simplicity in design. I've experienced the opposite using Now, but I still love the mission.

And if I want to keep my Now 1.0 (or iPhone 6 Plus) because I prefer it, why do you want to take it away so badly? My complaint is not that you are making improvements; I trust your leadership in this space, and you're obviously smart people. The problem is the behavior around deprecating earlier versions. It's herky-jerky and inconsistent.

Also, what is the mission now? That would be great to know.

andrewmunsell 282 days ago [-]
Are there any public plans for early access, or an ETA to GA, for features like scheduled jobs? That's very important to me, and since daemons aren't supported on v2, there's currently no way to run them within Now (I suppose you could have an external cron system GET/POST a URL every N minutes).
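That external-cron workaround can be as small as one crontab entry on any always-on box (the URL and schedule here are made up):

```
# Hypothetical crontab entry: POST a deployment endpoint every
# 5 minutes so the lambda behind it does the periodic work.
*/5 * * * * curl -fsS -X POST https://my-app.now.sh/api/run-scheduled-job
```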

Congrats on the launch, it looks like a much more cohesive product now!

Rauchg 282 days ago [-]
We have been running an internal beta of the jobs feature, and we'll keep everyone posted!

We agree that invoking entrypoints periodically is a much better way to model most worker / daemon workloads than a "stateful event loop".

The commitment to our customers is that we won't phase out V1 until this use case (and others!) are thoroughly addressed.

kylehotchkiss 282 days ago [-]
Yay! Thanks for letting us know it's coming.
jancurn 282 days ago [-]
Apologies for sneaking into this thread. You can schedule batch jobs using Apify Actors and Scheduler: the actor can run any Docker image at times specified by a cron expression. Just have a look at:

Disclaimer: I'm a co-founder of Apify

kylehotchkiss 282 days ago [-]
I actually run a cron job on Now by setting the minimum scale to 1 instance. It's for a hobby project, but it's been reliable enough for my personal needs.

The only tough part is having to delete the old cron instances when I push an update. Scheduled jobs (or some kind of cron support) would be excellent.

XCSme 282 days ago [-]
Is it only for HTTP or does it also work with WebSockets?
Rauchg 282 days ago [-]
We are planning to give you very simple hooks and integrations to support WebSockets (e.g. PubNub or Pusher).

For scalability reasons, it's likely you'll want to separate the code that deals with the "request-response" lifecycle from the realtime subscription.

jkarneges 282 days ago [-]
Fanout can translate between raw WebSockets and HTTP, and this works well with serverless backends.
gigatexal 282 days ago [-]
How do users handle logging? Just an agent in the app that sends data to a hosted logs platform?
Rauchg 282 days ago [-]
Logging is built-in. Just append /_logs to your deployment URL and you'll see all stdout and stderr of your underlying functions.
gigatexal 282 days ago [-]
Yup, I eventually found that. It's not for static websites and such, but still really cool. If your app writes to stdout, would that be a way to format logs yourself?
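Presumably yes: if /_logs just shows whatever the functions write to stdout, a sketch like this (the JSON shape is my own invention, not anything Now prescribes) would give you structured, self-formatted log lines:

```javascript
// Hypothetical structured logger: one JSON object per line on stdout,
// so each entry shows up as a single self-describing line under /_logs.
function formatLog(level, message, extra = {}) {
  return JSON.stringify({
    time: new Date().toISOString(),
    level,
    message,
    ...extra,
  });
}

function log(level, message, extra) {
  process.stdout.write(formatLog(level, message, extra) + "\n");
}

log("info", "request handled", { path: "/api/hello", ms: 12 });
```
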
garysahota93 282 days ago [-]
When they say "serverless", do they mean building apps with local data where nothing goes to the company, or do they mean the server aspect is all taken care of by them? I'm not the most advanced person and would really like to know more.


dreamcompiler 282 days ago [-]
"Serverless" is a marketing buzzword. It does not mean there's no server; it means that you don't have to manually provision a dedicated machine or machines to run your code. Instead, you provide some code (a "lambda") to your cloud provider that they will run when an HTTP(S) request comes in.

There is still a cloud provider which is manifestly a server; that's why this buzzword is stupid.
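Concretely, the "lambda" you hand over is roughly just an exported request handler. This is a sketch matching Now 2.0's Node builder shape as described in this thread; treat the exact signature as an assumption:

```javascript
// A function, not a server you manage: the platform provisions it
// on demand and invokes it once per incoming HTTP(S) request.
const handler = (req, res) => {
  res.end("Hello from a function, not a server you manage");
};

module.exports = handler;
```
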

wilg 282 days ago [-]
This is why I call it "I Can't Believe It's Not Server!" instead.
jacobush 282 days ago [-]
No, but it's some kind of grease you put in the pan. I love your appropriation!
cjohansson 282 days ago [-]
+1, it’s like managed hosting but you have no control over the server since it “doesn’t exist”.
Rauchg 282 days ago [-]
The data can be stored anywhere. Now is only in charge of executing your code in the best infrastructure provider available.

It completely takes away the pain points of setting up and managing cloud infrastructure directly.

karakanb 282 days ago [-]
Kudos to the team on launching this new version; it looks really slick, and my previous experience with Now was pure joy, it was really easy to get up and running. My only question with this approach is: what is the suggested way to work locally? For example, the `Majestic Monorepo` sample is quite hard to run locally without Docker or a similar containerization technology. What is Now's approach to this? Would we be able to work offline with a project like this, for example?
schnarfnark 282 days ago [-]
Looks cool. FYI – API Reference in the nav 404s:
PullJosh 281 days ago [-]
All of this serverless stuff looks great in theory, but I'm struggling to understand what the development environment looks like. Does every change in testing get deployed to Zeit? That seems cumbersome. Otherwise, is there a nice way to test this stuff locally? (Also, how would one set up a database? On a separate service?)
avip 282 days ago [-]
I'd really like to see Now support non-HTTP deployments. That would be a killer feature. There's currently no hosted raw-TCP solution on offer, period. Now could position itself as the first mover there. The Now interface is already amazing and a total joy to use.
oliverx0 282 days ago [-]
I was just browsing through the docs and got a rate-limit error (429).

From what I got to see, it is really cool!

js4ever 282 days ago [-]
Another similar wrapper around lambda functions, but with a serverless SQL database:
pier25 282 days ago [-]
This makes a lot more sense for me. Next step for total serverless domination is solving the database.


jahewson 282 days ago [-]
Has anyone here used Now extensively for dynamic sites? What was your experience?
kierenj 282 days ago [-]
Still haven't been able to get a simple Now example running; issue open since Apr 2017 -
dev1789 282 days ago [-]
We loved you, Now. But we have to leave you, Now.

Serverless is still bad for modern web apps that serve HTTP requests to real people.

I know that from practice, running Next.js & a GraphQL backend on AWS Lambda.

Many devs think that the infamous "cold start" problem is easily solved by:

Keeping a number of functions "warm" 24x7.

Getting cold start times below a certain level, the often-cited < 300 ms.

Both solutions currently fail to address the problem.

Let's say you keep 5 functions "warm". Of course the first users will not suffer from cold starts. But then you get a large number of almost simultaneous requests (a burst).

A certain number of users will get immediate responses, as they are handled by warm functions. But the rest will experience far worse response latencies, as new lambda functions are spun up for every additional request coming in while the warm functions are occupied.

While spinning up new servers is definitely slower, the total wait time in the classical server model, under heavy load, is distributed equally among all users of your app, so everyone sees it perform "a bit slower".

In the serverless model, on the other hand, some users will enjoy immediate response times while others get horrible ones (especially taking the additional latencies into consideration, see below). And they will leave your website within those 3 seconds.
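That bimodal burst behavior can be sketched with a toy model (all numbers invented, purely for illustration):

```javascript
// Toy model of a request burst hitting a pool of pre-warmed lambdas.
// Invented numbers: a warm response takes ~50 ms, and every request
// beyond the warm pool pays a ~3000 ms cold start on top of that.
function burstLatenciesMs(requests, warmPool = 5, warmMs = 50, coldMs = 3000) {
  return Array.from({ length: requests }, (_, i) =>
    i < warmPool ? warmMs : warmMs + coldMs
  );
}

const latencies = burstLatenciesMs(20);
// The first 5 users get fast responses; the other 15 pay the penalty.
```
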

For most commercially successful websites, increased latency is far more expensive than some additional scaling cost. Everyone coming to our app is worth a lot of $$$, as we had to spend a lot of $$$ to get them here. One should consider that when discussing scaling. I think most people know the famous Amazon study: 100 ms === -1% in sales. So if you try to argue about scaling cost, please keep your Google AdWords bill close.

Keeping functions warm also feels like the stone age. You're basically just pinging the thing all the time. People are seriously writing about ping-time distribution algorithms on Medium, i.e. how often, and at what time deltas, you need to ping the thing in the hope that e.g. AWS Lambda keeps a certain number of functions warm for you. Feels like a mysterious hack to me.

Now, let's debunk the naive latency argument. (I was sold on that too, before I ran a real-life app on it.)

Obviously you're not running a huge monolith; you've split your app into services, at least a backend & a frontend. Now, take a look at our use case.

Client →

→ Next.js Frontend (Lambda with Node.js)

→ GraphQL Server (Lambda with Node.js)

→ Prisma GraphQL Server (Lambda with the JVM)

→ Some managed database at GCP or AWS

See the problem? You don't have ONE cold start time. You might have THREE.

And don't forget, this is just additional time added on top of:

Request handling time + database calls + data center latencies + client location latency + low client bandwidth. This is why you can read comments here about real people waiting 5 seconds for your page to load.

Also: the < 300 ms is just a best-case scenario that works for Node.js apps. Try putting a JVM app into the chain (e.g. Prisma GraphQL, which runs on Scala). The results will be far worse.
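A back-of-envelope sum over that chain makes the point; the per-hop durations below are invented for illustration, not measured:

```javascript
// Worst case: every hop in the chain cold-starts for the same request.
const coldStartMs = {
  nextjsFrontendNode: 300,
  graphqlServerNode: 300,
  prismaServerJvm: 2000, // JVM cold starts are typically far worse
};

// Total cold-start time stacked on top of the actual work.
const worstCaseMs = Object.values(coldStartMs).reduce((a, b) => a + b, 0);
console.log(worstCaseMs);
```
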

That's also why the term "global deployments" is just a nice buzzword. Never forget to keep all your stuff in the same datacenter; you can't abstract away physics. You need to know, or make educated guesses about, who really owns the datacenters behind Zeit. If you read "Brussels", get your database at Google. If it's SFO, AWS might be the better fit.

If you're running GraphQL servers with subscriptions, or any other keep-alive connections, beware of other issues with Lambda.

We were planning to use Zeit Now for our whole app infrastructure, besides the DB, which was supposed to be managed by Google or AWS.

We were doing serverless on AWS Lambda & Now before, and we were facing severe issues with AWS Lambda: massive problems with cold starts and with user request bursts (e.g. when someone posts an article about your app, or after running a TV commercial).

While I try to "embrace" lambdas that, I must admit, appeal to me from a dev-experience viewpoint, I'm losing $$$ and, FAR WORSE, providing a bad experience to my users.

revskill 281 days ago [-]
How about WebSockets over lambdas?
silverstrike 280 days ago [-]
Not possible. Their marketing claim that all 1.0 use cases can be moved to 2.0 is disingenuous.
anthonyshort 278 days ago [-]
They never said that all 1.0 use cases are covered now. They promised that eventually they'll reach use-case parity, and only then deprecate 1.0.
nine_k 282 days ago [-]
Can't help but say that "Now 2.0" is one of the coolest headlines possible. (The scope of the article is, of course, a bit narrower than a new major release of the current reality.)