Augment, a GitHub Copilot rival, launches out of stealth (techcrunch.com)
simple10 10 days ago [-]
I'm always a bit surprised when startups come out of stealth mode with no product demo videos and only a press release. My guess is they're either wanting to raise more money before official launch, or just wanting to grab a press cycle ahead of competitor announcements to build the waitlist. Is there any strategic reason in 2024 to not include more details on the website?
mritchie712 10 days ago [-]
also, why be in stealth for such an obvious product? There are dozens of recently launched well funded competitors in addition to all the incumbents that have added similar features.
vasco 10 days ago [-]
If you're truly doing difficult things there's no need for stealth; you just need to protect your trade secrets and not file patents. That's what stealth should mean. If stealth just means you haven't advertised and you asked your employees not to update LinkedIn, that's just you and your investors having an ego and pretending you're doing cool stuff because you want to say the word "stealth". Like paying more for something labelled "tactical" even when it serves no purpose.
htrp 9 days ago [-]
Your next LinkedIn Bio Company.

Tactical Stealth Alpha. Launching in 2025.

choppaface 10 days ago [-]
For LLM stuff, one upside to “stealth” is you might obfuscate your GPU demand. Now that the GPU market knows there’s another player with over $200m to burn, the market is going to want to take every last penny of that funding.
zaphirplane 10 days ago [-]
Who cares what a startup is doing; whether they want to rent cloud GPUs or buy GPUs, it's volume discounts

They aren’t getting a better or worse deal because they are in the autocomplete business or the avocado business

probably_a_gpt 10 days ago [-]
The investments in GPUs and datacenters are in the hundreds of billions; even if the market worked like this, that $200m would be a drop in the bucket
zaphar 10 days ago [-]
That isn't how markets work, like at all. Will another buyer with money to spend help drive the price up? Maybe, maybe not. It depends on where that buyer spends their money and whether the supply forces that buyer to compete against other buyers for the same GPUs. Supply and demand curves drive the price.
hwinh 10 days ago [-]
Good point
silenced_trope 10 days ago [-]
Right? I wonder what this will offer?

Marginal improvement over GitHub Copilot or Cursor?

abhyantrika 10 days ago [-]
Is there any product that allows me to plug in my own Open source LLM and run it locally?
Havoc 10 days ago [-]
Yep. Continue.dev with ollama. Works, but the suggestions tend to be short. Like a smart autocomplete.

Needs a GPU, though not a big-VRAM one, given that smaller models are better for this due to speed.
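If you just want to see what a small local model produces before wiring up the editor plugin, you can hit the local Ollama HTTP API directly. A minimal sketch (the model name is just an example; use whatever you've pulled):

    # Rough sketch: ask a local Ollama server for a short code completion.
    # Assumes `ollama serve` is running and you've pulled a small code model,
    # e.g. `ollama pull codellama:7b-code` (model name is illustrative).
    import json
    import urllib.request

    payload = {
        "model": "codellama:7b-code",         # example model name
        "prompt": "def fibonacci(n):\n    ",  # text to complete
        "stream": False,                      # return a single JSON object
        "options": {"num_predict": 64},       # keep the completion short
    }
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])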

btzs 10 days ago [-]
Tabby.ml also works well
SmellTheGlove 10 days ago [-]
Rubber duck does.
readyplayernull 10 days ago [-]
In case they failed.
wnc3141 10 days ago [-]
your hit rate as an investor/entrepreneur goes up if your poor investments stay in the dark
remir 10 days ago [-]
Possibly to create hype.
jachee 10 days ago [-]
“We’re going to create hype by… *checks notes* …not talking about our product.”
mooreds 10 days ago [-]
They raised $227M. I hope they don't need to raise more money: https://www.augmentcode.com/blog/augment-inc-raises-227-mill...
krainboltgreene 10 days ago [-]
I ask that you look at the last 20 years of venture capital and tell me if they're going to be able to stop.
mooreds 10 days ago [-]
Sure, but not right now (since they just raised their B).

My guess is it was the second reason:

> wanting to grab a press cycle ahead of competitor announcements to build the waitlist.

simple10 10 days ago [-]
It's AI. They'll likely end up raising a lot more money, if for no other reason than acquisitions and building a financial moat. The AI companies with the most money in the bank will likely win.
simple10 10 days ago [-]
Not to belabor the point, but for comparison's sake, here's another launch on HN today for an AI tool[1] that can create frontend code. When I asked it to make a landing page with Augment's value proposition, it created a decent-looking site that included a demo video section and testimonials. I'm eager to try out Augment to see if it can create its own landing page. lol

https://langcss.com/

dmaleknia 10 days ago [-]
I'm happy to show you the demo and discuss. Email me: dominic@augmentcode.com
yunohn 10 days ago [-]
I feel like this response is the exact opposite of what HNers expect - a random reply claiming they can only show a private demo.
danielbln 10 days ago [-]
Gives me the same energy as those "schedule a demo" forms that I close immediately...
RanHal 10 days ago [-]
Take one for the team and let us know your thoughts after...
ErikBjare 8 days ago [-]
I thought you guys were out of stealth? Just make a public demo already.
PedroBatista 10 days ago [-]
The "stealth" of today is like wearing a 1975 Stones concert tour T-shirt.

It's both an external and an internal hype play so everybody knows we "are changing the World", they just don't know it yet.

It's whatever, doesn't mean anything. The only hook I found is the "we got money from Eric Schmidt and got some hyper-local celebs from big companies like Google" line.

All I see is a small constellation of "big" names and an uninspiring "fine-tuned “industry-leading” model". Not poo-pooing anything, I'll just wait for the actual product.

budududuroiu 10 days ago [-]
Does anyone here get genuinely useful suggestions from Copilot?

I found it amazing for 1 thing: filling up verbose configuration like Terraform or Kubernetes manifests.

For code, it’s horrible, to the degree that I spend more time rejecting Copilot suggestions or having to extensively modify the ones I accept.

I stopped using it a couple of months ago

dotnet00 10 days ago [-]
I often have to write little scripts in Python or C++ to test/verify things. Copilot is great for handling the setup/boilerplate in such cases. It has meant I spend a lot less time trying to modify things in place and come up with ways to debug complex issues, since I can just tear out the part I'm working on and have Copilot quickly fill in the holes enough to get the torn-out portion running independently.

It's also great for picking up new packages to do 'simple' tasks like reading in a specific file format.

langsoul-com 10 days ago [-]
To me, it's a nice and fancy intellisense. Great for finishing the line or making comments.

Saves tons of time

everforward 10 days ago [-]
Yes, the comments. My ability to write comments has gotten significantly better because of Copilot.

It’s awful but in a helpful way. Like it will suggest “if $argument is empty, this function will return an error”, which is not true because $argument is an int, but it does do something wonky if you give it negative numbers so I remember to add that.
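Roughly this kind of thing, as a made-up illustration (names and behavior invented for the example):

    # Hypothetical example: the suggested comment is wrong for an int argument,
    # but it reminds me to document the real edge case.
    def take_last(items: list, count: int) -> list:
        """Return the last `count` items.

        Suggested comment: "Returns an error if count is empty."
        (Wrong: count is an int.) What actually needs documenting: a negative
        count hits Python's slice semantics and returns a surprising result
        instead of raising.
        """
        return items[-count:] if count else []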

the_lucifer 10 days ago [-]
> It’s awful but in a helpful way. Like it will suggest “if $argument is empty, this function will return an error”, which is not true because $argument is an int, but it does do something wonky if you give it negative numbers so I remember to add that.

So like an intern/new hire asking what they think are basic questions but lead you to realize an edge case you missed?

drcongo 10 days ago [-]
Works well for me in Python, often writing code exactly how I would write it, even to the point of using my own custom packages.
haiku2077 10 days ago [-]
Yes, it successfully wrote significant portions of an open source Go project I maintain.
Wheaties466 10 days ago [-]
Good to know. I knew they were good with Python in general, but I had no idea they were good with Go syntax.
probably_a_gpt 10 days ago [-]
Yeah, it's fantastic for Go and Python.

Knowing how to trigger it properly is key though; just like with all LLMs, the context it has determines the quality of its output.

margorczynski 10 days ago [-]
I haven't tried Copilot, but Continue + Claude Sonnet worked pretty well for me; it really sped up development of the last hobby project I did in Rust.
anonzzzies 10 days ago [-]
Well over 60% of our team's daily code is now written by LLMs, one of them being Copilot. We have a lot of custom setup: our own VS Code plugins and basically our own IDE built on top of VS Code, but it works very well. I believe we will reach 80%+ this year; our current bottleneck is speed (which translates somewhat to money).

But definitely our team would be hit pretty hard if we had to drop copilot now.

citizenpaul 9 days ago [-]
60% of the team uses LLMs? Or 60% of their code is LLM-generated?

Is the source the stats that it collects, like "you have accepted 60% of the recommendations"? I've noticed those are rather inflated. I've yet to see one that decrements its count even if you delete the suggestion right after accepting it, much less if you have to go back later and delete a bunch of stuff it did subtly wrong.

lijok 10 days ago [-]
Woah, really? Would love to hear more. Do you have any blog posts or anything about this?
anonzzzies 9 days ago [-]
For some of the reasons discussed earlier on hn today (augment) we (not augment but similar) are in stealth mode. Currently we get deals that get us very far; that’s over with when we launch…
zarzavat 10 days ago [-]
This is not my experience at all. How are you using it? And in what programming language?

The technique to using Copilot (as with any auto-completion) is to keep typing until you get a completion that is equal to what you were going to type. So yes you have to keep rejecting things but the rejection doesn’t take any more effort than normal auto completion.

budududuroiu 10 days ago [-]
I've had this experience across statically typed languages and not. I do admit that if your type system is set up correctly, Copilot improves greatly.

For me, typing 90% of the code I wanted just to get the last 10% completed wasn't worth the $20/month subscription.

zarzavat 10 days ago [-]
The reason I ask is that it’s going to perform better if the language you use is well-represented in the training set.

Much of the value of Copilot is that it helps you navigate new things that you are not familiar with. For example, I use scipy/numpy a few times a month, so I am competent but not an expert by any means. Copilot is incredibly useful to suggest ways of doing things that are more idiomatic than what I was planning to write.

Another way that it is useful is that it massively reduces logical bugs such as off-by-one errors that aren’t caught by typechecking, since it’s unlikely that both me and copilot will make the same off-by-one error at the same time. If only copilot makes an off-by-one error then I will catch it because the suggestion will be different to what I expected. If I am about to make an off-by-one error then copilot will suggest something contrary to my expectation and that forces me to reconsider what I was about to write.
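A made-up example of the kind of divergence that catches it:

    # Made-up example: checking adjacent pairs in a list.
    def has_adjacent_duplicate(xs: list) -> bool:
        # What I was about to type: range(len(xs)), which blows up with an
        # IndexError on xs[i + 1] at the last element.
        # What the suggestion showed: range(len(xs) - 1).
        # The mismatch is what makes me stop and reconsider.
        return any(xs[i] == xs[i + 1] for i in range(len(xs) - 1))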

mrjin 9 days ago [-]
I guess we are on the same page. It was able to get something right maybe 50% of the time, but it's getting worse and worse and I disabled it months ago.
Starlevel004 10 days ago [-]
When you remember most people here are Go or JS developers, it makes a lot more sense that they get use out of copilot and co.
cmpalmer52 9 days ago [-]
Yeah, I struggle some days to get my AI assistant to produce useful code for medium to hard problems. On the other hand, if I say “Write a Python program to recursively search through our code files and find or change something,” it will work 95% of the time.

My favorite AI success story was a module of C# code that did text overlays on photos. It was written by hand. When we started encountering some issues with the graphics package it was using, I asked the AI (JetBrains in this case) to rewrite the module using a different package. Tedious but relatively straightforward, and the results were correct first try. That was what convinced me that (eventually) these types of tools will be great.

Meanwhile, I’m so sick of seeing “my apologies, I misunderstood your request…” after it generates suggestions with non-existent methods, bad logic, or missing parameters.

iLoveOncall 10 days ago [-]
Yeah, my experience is that, like you said, it's good for configuration, and besides that it only generates utils it has seen before.

The thing is I don't need to rewrite a function I've seen before; I'll use a library or framework that already offers all of that.

celeritascelery 10 days ago [-]
> Even Copilot loses money, to the tune of around $20 to $80 a month per user, according to The Wall Street Journal

Wow. Does that mean that once the VC money dries up that will be the real cost of the products ($30-$90 a month)? That is much less accessible than $10.

richardw 10 days ago [-]
OpenAI/MS are also building a DB of code recommendations and user fixes (GPT writes crap, you fix it), which is an excellent asset. I expect in 5 years it’ll be a couple of sentences to replicate most of whatever startups we create while using Copilot, but now with the fixes built in.

I’ve used it constantly for a while now and assume all files in the project are basically training material to replicate what I’m doing. Annoying, but that’s what I get for not using local or a service from a privacy focused provider. Although I am using Claude in my scripts to distribute my footprints.

I’ve added my name to the wait list. This seems like a candidate for a company that intends to respect IP and that’s worth something.

dudus 10 days ago [-]
The cost is supposed to go down as LLMs become more efficient and hardware efficiency goes up with time.
faeriechangling 10 days ago [-]
I fail to see what advantage they're getting from subsidising users in the meantime, and I definitely agree, these cloud AI solutions are subsidised.

Still, the value proposition gets worse for these cloud models: all I have to do, if I'm using say VS Code, is install a different extension and configure a few settings, and Bob's your uncle, I'm using a different AI. What benefit exactly do companies get from blowing hundreds of dollars subsidising my usage again? Where's the moat? Where's the lock-in?

kevindamm 10 days ago [-]
patterns of behavior (people will often keep using what they're used to, or assume their favorite model gives better answers regardless), free QA and training data, free advertising

Even when the switching cost is lower than what you describe, people tend to keep using the product they had gotten used to.

sdesol 10 days ago [-]
In business school you learn that you can't compete with "as good as". If you are using Copilot and another company comes along that is "as good as Copilot", there is very little incentive for people to switch.
onthecanposting 10 days ago [-]
This is a lot like the Amazon strategy. At one point you probably had a few alternatives in mind for online shopping. Much less so now.
twobitshifter 10 days ago [-]
High costs are due to inefficient chips and overpowered models. It's like we discovered TNT and now we're using it to weed a garden. Purpose-built chips and TPUs and pruned models will be cheaper. You can do speech-to-text on a $10 chip. https://www.nature.com/articles/s41586-023-06337-5

Imagine this chip for JavaScript.

zeroonetwothree 10 days ago [-]
If it's, say, $50/month, that will be easily affordable for businesses and not really most other users. (I'm assuming it's providing nontrivial benefit, of course)
esyir 10 days ago [-]
It's probably expected that the cost goes down with time. Also, what VC money? Copilot is Microsoft.
ipsum2 10 days ago [-]
WSJ is wrong. Autocomplete code models are small (5B param) and very cheap. A single inference is roughly $0.000001.
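Back-of-envelope sketch (every number below is a rough assumption, not a measurement):

    # Back-of-envelope: a ~5B-parameter model on one rented GPU, serving short
    # completions in large batches. All numbers are rough assumptions.
    gpu_cost_per_hour = 2.00       # assumed on-demand price for one GPU, USD
    tokens_per_second = 5_000      # assumed aggregate throughput with batching
    tokens_per_completion = 30     # a short one- or two-line suggestion

    cost_per_token = gpu_cost_per_hour / (tokens_per_second * 3600)
    print(f"${cost_per_token * tokens_per_completion:.7f} per completion")
    # ~$0.0000033, the same order of magnitude as the figure above
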
acchow 10 days ago [-]
Copilot uses GPT4 now
ipsum2 10 days ago [-]
I mentioned autocomplete, not the chat service.
SgtBastard 10 days ago [-]
GPT4 isn’t a chat service, friend.
acka 10 days ago [-]
I think the point that ipsum2 is trying to make is that Copilot's chat service and its code completion service could be using different models, which is not uncommon for coding assistants.

Continue[0] for example can use up to 3 different models in a session: a large model such as GPT-4 Turbo for chat and code QA, a smaller low latency model such as StarCoder2-3B for code completion, and yet another model such as all-MiniLM-L6-v2 for generating embeddings.

[0] https://www.continue.dev/

rayxi271828 10 days ago [-]
While it's clear that AI coding assistants still have a long way to go in general, I wonder how yet another startup, presumably built on the same underlying technology, is going to differentiate itself in a material way?

There are only so many axes along which improvements can be made in this domain, aren't there? What are the bottlenecks that, if solved, will produce a true breakthrough, exactly?

Doesn't the current approach have an upper limit that's inherent in the whole architecture, nay, even the whole foundational theoretical aspect of it?

Would love to hear from anyone who has come across one AI coding assistant that's obviously head and shoulders above everything else. I've tried Copilot, CodeWhisperer, and Ghostwriter.

mike_hearn 10 days ago [-]
I tried Copilot and another one whose name I forgot (TabNine?). They didn't stick. What stuck is the new single line model JetBrains shipped recently in IntelliJ:

• It runs locally.

• It is extremely fast.

• It is not overly aggressive. It only shows up sometimes and when it does I nearly always accept its suggestion.

• It's a native part of the IDE so doesn't interfere with autocompletion.

I gotta say, JetBrains nailed it with this one. Single line mode is a much better way to think about AI driven autocomplete. That said, it's also obviously much more limited. It helps and is pleasant, but isn't revolutionary.

The other AI assistant I use sometimes is aider.chat, it's an open source tool. You type what you want and it generates git commits for the requested change. This is clearly the direction to go in long term and is much more revolutionary, but there are still problems with it.

> What are the bottlenecks that, if solved, will produce a true breakthrough, exactly?

There's a lot of "low hanging fruit" here (not that low), because the big AI labs aren't focusing on code gen AI right now. I can think of at least 4 or 5 paper-sized research directions to make improvements. The challenge is that right now the only models any good at coding are GPT-4/Opus level models, so doing anything with them is impossible unless you're working at a tiny number of labs. I don't work there so the "obvious" ideas I have aren't of much use. That leaves explorations of whether you can take a great open source model and boost its coding skills a lot. I think you probably can, especially if you have the resources to do a Mixtral or a Llama 3 yourself. Most smaller labs seem to be focussing on general performance optimizations rather than trying to reach GPT-4 level skills so the number of labs capable of doing these experiments should go up with time.

From talking to people there's also a general fear I think that there's not much point working on some sorts of ideas because what if OpenAI come out with GPT-5 and everyone gets taught another Bitter Lesson (http://www.incompleteideas.net/IncIdeas/BitterLesson.html)? Better to just wait things out and see where model capabilities stabilize. From all the noise about licensing deals it's starting to sound like we're data constrained even for the companies that are pushing the boundaries of what fair use means, so maybe we'll see appetite to do smarter things with coding models next year.

Deukhoofd 10 days ago [-]
That's just the client-side implementation; JetBrains' Copilot client does exactly the same, as that's how JetBrains' code assistant API is implemented. These startups would need to differentiate themselves on the server level; their clients are limited by the IDE.
Mystrl 10 days ago [-]
He's talking about something different (https://blog.jetbrains.com/blog/2024/04/04/full-line-code-co...). I also found this significantly more comfortable to use than Copilot. Completions show up faster, it doesn't try to overpredict (when I was trying out Copilot it would often suggest large blocks of completely useless code), and it doesn't cost anything extra.
paradite 10 days ago [-]
My bet is that the future of software engineering will "shift left" towards product requirement completion rather than focusing on writing raw source code. More emphasis will be on how to translate requirements into software, without worrying too much about writing code itself, kind of like how we don't worry about assembly code or compilers now.

Maybe a new form of user interaction, more tailored to product managers or product engineers, will be the trend?

Anyway, I am building my own AI coding tool towards that direction with a new user experience focused on task instead of code: https://prompt.16x.engineer/

mulmen 10 days ago [-]
With emerging technology both capital and product follow the Monte Carlo method.
jerrygoyal 10 days ago [-]
give Cody a try.
hipadev23 10 days ago [-]
I look forward to anything better than Github Copilot. The implementation in VSCode is horribly slow, constantly injects a lightning bolt icon near my cursor, and interferes with intellisense-style LSP language hints/docs.

I've found it mostly helpful for saving time on obvious boilerplate code, but with the annoyances above plus the occasional inexplicable errors it introduces in said boilerplate code, I've just cancelled the entire thing.

cal85 10 days ago [-]
I agree with you on Copilot’s clunky integration in VS Code, but I never really thought of it as slow, maybe because I have no other code assistant AI to compare it to. Are others faster?
onel 10 days ago [-]
I read that for Supermaven speed is one of the big pluses. That and the 300,000-token context. Haven't tried it though.
russellbeattie 10 days ago [-]
If anyone from Augment reads this, the careers@augmentcode.com email address (from the "Apply Now" button on the careers page) is bouncing with a "does not exist" error.

Edit: Seems to be fixed now!

dalmaer 10 days ago [-]
Sorry Russell! Fixing it now...
russellbeattie 10 days ago [-]
Hey Dion! No worries! Just thought you guys would like to know sooner than later. Good luck with the launch!
Havoc 10 days ago [-]
I really want groq to compete in this space.

It's the obvious application for their ridiculous speed. Quality of suggestions matters, sure, but waiting breaks your flow.

mepian 10 days ago [-]
I wonder if the name was inspired by Doug Engelbart's system: https://dougengelbart.org/content/view/155/
krainboltgreene 10 days ago [-]
The only thing more hilarious than these articles is how funny it will be when the companies either close because they can't possibly solve "Most companies are dissatisfied with the programs they produce and consume; software is too often fragile" as a problem set or they don't make enough money to justify the $1B valuation (and rising!).

The writers of this article don't even bother doing any introspection into these claims or ideas. Is it important to know that programmers have been using AI for the last 50 years? I'm sure it isn't.

peter_d_sherman 10 days ago [-]
>"Practically every tech giant offers its own version of an AI coding assistant. Microsoft has GitHub Copilot, which is by far the firmest entrenched with over 1.3 million paying individual and 50,000 enterprise customers as of February. Amazon has AWS CodeWhisperer. And Google has Gemini Code Assist, recently rebranded from Duet AI for Developers.

Elsewhere, there’s a torrent of coding assistant startups: Magic, Tabnine, Codegen, Refact, TabbyML, Sweep, Laredo and Cognition (which reportedly just raised $175 million), to name a few. Harness and JetBrains, which developed the Kotlin programming language, recently released their own. So did Sentry (albeit with more of a cybersecurity bent).

Can they all — plus Augment now — do business harmoniously together?"

Well, we don't know!

We do know however that there are a lot of AI coding assistants, and there will probably be many more in the years to come...

The excerpted text above makes for a reasonably good list of what's available as of the current date...

Related: https://github.com/sourcegraph/awesome-code-ai

htrp 9 days ago [-]
>In 2022, Ostrovsky and Guy Gur-Ari, previously an AI research scientist at Google, teamed up to create Augment’s MVP. To fill out the startup’s executive ranks, Ostrovsky and Gur-Ari brought on Scott Dietzen, ex-CEO of Pure Storage, and Dion Almaer, formerly a Google engineering director and a VP of engineering at Shopify.

The other problem is they may have built a solution that was state of the art at the time but the SOTA has passed them by.

A lot of AI codegen/completion solutions were inspired by general transformer (BERT or GPT-2) approaches without any kind of chat-tuned models... and then OpenAI dropped ChatGPT. So what was a revolutionary demo in 2022 is now just a couple of CoT prompts in the latest OpenAI API call.

chubs 10 days ago [-]
The article says "Augment is using fine-tuned “industry-leading” open models of some sort" - correct me if i'm wrong (happens often! :) ) - but does this mean that they're using some open model from huggingface, wrapping some UI around it, and voila they're worth a billion dollars? Hmm.
andrewstuart 10 days ago [-]
Even OpenAI can't make a better coding copilot than ChatGPT 3.5.

I'll be surprised if this one is better.

linsomniac 10 days ago [-]
Just to clarify, this is augmentcode.com , not augment.co ("Personal AI assistant")
aborsy 10 days ago [-]
I haven't used Copilot. Isn't it collecting a lot of code from the Internet and combining it to make autocomplete suggestions?

I just don’t see how it could think and write new code like a human developer would.

Any feedback from copilot users?

esalman 10 days ago [-]
The magic is in how it combines the code based on context.
electriclove 10 days ago [-]
Is Eric Schmidt still relevant?
khazhoux 10 days ago [-]
In what sense? He funds many companies, so in that sense he is still relevant. He can opine on how he grew the biggest company that launched today’s web, so in that sense he is still relevant. And let’s not forget, you know his name but he doesn’t know yours, so in that sense also he is relevant.
aborsy 10 days ago [-]
He has some government roles apparently!

Care to clarify?

fdschoeneman 10 days ago [-]
They collected my email when I signed up to try and then told me I couldn't try it. Unethical.
bix6 10 days ago [-]
Anyone tried it?
s09dfhks 10 days ago [-]
Friend of mine works there and got me access to the VScode plugin. I’ve only used it a handful of times and I have not used copilot. It makes ok suggestions. That’s about all I can say. I don’t expect it to do my job for me but it’s not useless
xenospn 10 days ago [-]
Same as copilot, then?
khazhoux 10 days ago [-]
It's hard to imagine how a company raised a quarter-billion dollars off the same idea everyone had (and many already acted on) after we all saw ChatGPT in 2022. There's how many copilots already?
flanked-evergl 10 days ago [-]
Is there any other service that does what GitHub Copilot does? I have looked, and they are all just ChatGPT-like things branded as copilots; maybe I have not looked hard enough.
geepytee 9 days ago [-]
Lots of them. Check out https://double.bot for example.
tom_vidal 8 days ago [-]
What on earth is the point of doing “stealth”? This just sounds to me like a way to burn money and scale before finding product-market fit.
moneywoes 10 days ago [-]
how did they raise so much?
heyoni 10 days ago [-]
Eric Schmidt. It’s in the title.
0xfedbee 10 days ago [-]
Just another blow of air into the AI bubble. Can’t wait to see all these stupid companies and their VCs go bankrupt.
brevitea 10 days ago [-]
Hope it's not another Devin situation...
neilv 10 days ago [-]
First Eric Schmidt colluded with executives at other large employers to suppress software developer wages and job mobility. https://en.wikipedia.org/wiki/High-Tech_Employee_Antitrust_L...

Now he's trying to get into the business of reducing software developer headcount and professionalism, through copyright-theft-laundering LLMs.

vbezhenar 10 days ago [-]
Hopefully Copilot will compete. I mean, it's not exactly a new product anymore and it still:

- tries to insert triple backquote in my code for some weird reason

- does not do continuous code review, highlighting likely errors and typos

- does not do any code edit or delete, only code insert

I want AI to work along with me. I feel that Copilot has huge potential, but this limited UI keeps that potential untapped.

joshstrange 10 days ago [-]
Interesting, and I was about to say "I hope this rival will compete with Copilot". Copilot is still the winner from everything I can tell. I've tried a few open source ones with local models and they aren't as good yet; I've tried paying for Cody but I keep coming back to Copilot.

If it matters: I don't use the chat features of any of these plugins. It's just not muscle memory; I use Copilot as a fancy autocomplete and I open ChatGPT for longer or more detailed or specific code snippets. I also use Claude 3 Opus but I'll probably drop it, I just like ChatGPT more, the UI and the results.

That said, I agree with many of your points as things I would also like to have.

sqs 10 days ago [-]
Why do you go back to Copilot after trying Cody? (I’m the Sourcegraph CEO, and we make Cody.) We added Claude 3 Opus editor chat and also support GPT-4 Turbo. Curious, why don’t you use in-editor chat? If you find those models helpful, then having in-editor chat with code context seems like it’d be useful to you.
joshstrange 10 days ago [-]
The Cody plugin (Intellij) threw multiple errors for me, over multiple days, with a paid account. When it didn't throw errors it either didn't suggest anything or suggested something I thought was worse than Copilot. I even toggled both plugins a few times and tested with Copilot and it always gave me what I expected. I tested over a span of about 5 days of coding. I kept feeling limited and wanting to switch back to Copilot, which, to be fair, has had a handful of outages/errors but not this many in this short of a time.

I'm not sure if you write code, and I don't want to assume either direction, but if you don't and/or don't use these tools, let me share my experience.

I now have a "sense" for when Copilot will have a good suggestion and I pause while writing code waiting for it to spit something out. Copilot earned this "respect", for lack of a better word, because it has shown me time and time again that it "knows" what I want. With Cody, I would reach a point where I wanted to wait for a suggestion, I'd wait, and I'd wait, then my focus is broken because I'm wondering why I'm not getting a suggestion, I look in the bottom status bar, and Cody has some red symbol on it. When I did get something back from Cody, I just felt the results were equal to or worse than Copilot. I only paid for Cody to get Claude 3 Opus and that's all I tried. In addition to bugs that caused Cody to not work, I had multiple plugin crashes reported to me by my IDE. Here [0] is one of the bugs I saw, it was the last one I got before I uninstalled Cody and canceled my Pro subscription.

The last thing I'll say is that the purchase/activation process was not good. I signed up for an account, installed the plugin, logged in, realized if I wanted Claude 3 Opus I'd need a "Pro" account, bought a Pro account, went back to the plugin and had to log out and back in for it to realize I was a pro member. A "check membership status" button/option would be nice. I looked in the preferences and everything on the sidebar panel I could click on. I also tried opening/closing the panel and some other things I thought might trigger an account update call, no dice. It's whatever and I did eventually get it working (by logging out/in) but I wouldn't want that to be the user experience for someone who just gave me money so I thought I'd mention it.

[0] https://github.com/sourcegraph/jetbrains/issues/1306

sqs 10 days ago [-]
Thanks for the feedback and I’m sorry you had an awful experience here. We’re working on improving Cody for JetBrains a lot (our VS Code extension was first and JetBrains is in beta). This is helpful feedback and I hope we can earn your trust again in the future and make you love Cody.
joshstrange 10 days ago [-]
I’d love to love Cody, I like the ability to choose newer/different models. When the JetBrains plugin gets better I’d try it again.
faeriechangling 10 days ago [-]
I'm pretty unimpressed with copilot's limitations and am really hyped about the potential of running a good quality local AI that just continuously comes up with ideas.

I can even imagine setting up a CI/CD pipeline with regression testing and let an AI try out ideas for a few days and seeing what it comes up with. What we have now, glorified autocomplete, is some weak tea.

RanHal 10 days ago [-]
The triple backquotes are probably because it uses ChatGPT, which is optimized for chat and not code.
nsagent 10 days ago [-]
> Ex-Microsoft software developer Igor Ostrovsky believes that soon, there won’t be a developer who doesn’t use AI in their workflows.

I'm really curious about this. I honestly don't see a case where I would use a coding assistant, and everyone I have spoken to who does use one hasn't been a strong coder (they would characterize themselves this way, and I certainly agree with their assessment).

I'd love to hear from strong coders — people whom others normally go to when they have a problem with some code (whether debugging or designing something new) who now regularly use AI coding assistants. What do you find useful? In what way does it improve your workflow? Are there tasks where you explicitly avoid them? How often do you find bugs in the generated code (and a corollary, how frequently has a bug slipped in that was only caught later)?

rpbiwer2 10 days ago [-]
I was skeptical at first - tried the copilot beta, got annoyed by it, and quickly turned it off. Later I tried it again and haven't looked back.

For the most part it's not that I'm ceding the thinking to the machine; more often it suggests what I was going to type anyway, which if nothing else saves typing. It's especially helpful if I'm doing something repetitive.

Beyond that, it can save some cognitive load by autocompleting a block of code that wouldn't have necessarily been very difficult, but that I would've had to stop and think about. E.g. an API I'm not used to, or a nested loop or something.

The other big advantage that comes to mind is when I'm doing something I'm not familiar with, e.g. I recently started using Rust, and copilot has been a major help when I vaguely know what I _want_ to do but don't quite know how to get there on my own. I'm experienced enough to evaluate whether the suggested code does what I want. If there's anything in the output I don't understand, I can look it up.

> Are there tasks where you explicitly avoid them?

Not necessarily that I can think of, but after having copilot on for a little while it's gotten easier to tune it out when I'm not interested in its suggestions or when they're not helpful.

> How often do you find bugs in the generated code (and a corollary, how frequently has a bug slipped in that was only caught later)?

90% of the time I'm only accepting a single line of code at a time, so it's less a question of "is there a bug here" and more "does this do what I want or not?" Like, if I'm writing an email and gmail suggests an end to my sentence, I'm not worried about whether there's a bug in the sentence. If it's not what I want to say, I either don't take the suggestion, or I take the suggestion and modify it.

If I do accept larger chunks of suggested code, I review it thoroughly to the point where it's no longer "the AI's" code -- for all intents and purposes it's mine now. Like I said before, most of the time it's basically the code I was going to write anyway, I just got there faster.

dewey 10 days ago [-]
That's what I thought too in the beginning, but that's because the demos were always about writing a comment telling Copilot something like "// write a function to sort this array". In reality it's just a better autocomplete for me: I write the regular code like "func Delete(" and it autocompletes the parameters and the boring CRUD code for that function.

At the current time it's not that magic for me, but more a small speed-up with smarter autocomplete.

In future iterations, when it knows your whole code base, everything you see on the screen, your microservices and how they are connected, and can manipulate multiple files at the same time, that's when it would become more interesting to me.

pennomi 10 days ago [-]
I’m a strong coder. AI assistants can effectively be considered a really smart autocomplete. It simply saves time to insert the characters I was going to type anyway. If it suggests something other than what I want, I just simply don’t press tab.
spott 10 days ago [-]
Mostly copilot is a nice autocomplete. Sometimes it writes what I would write and then I don’t have to type it out.

Sometimes it helps when I’m writing code in a domain I don’t know. It can pull in a library function I wasn’t aware of for example.

It isn’t always right and sometimes hallucinates, but usually static analysis notifies me when the library function doesn’t exist or the signature is wrong, and then I have to go back and do the work I was going to have to do anyways.

The key I think is that most software isn’t writing unique code. We might write little nuggets of unique code and then glue it together with a ton of boilerplate. And LLMs are great at boilerplate.

doctor_eval 10 days ago [-]
I think I’m a strong programmer, but my learning style seems to be inquisitive and it’s a perfect match for GPT cos I can ask a million questions. So for me, GPT has been like working with someone much more knowledgeable about my platform (I haven’t done a lot of modern front end stuff before), but seemingly not as strong on architecture or domain.

This combination has meant that I’ve done in about 3 days what I thought would take me 2 weeks. And I’ve enjoyed the shit out of it too.

doix 10 days ago [-]
> What do you find useful? In what way does it improve your workflow?

It's a better auto complete. When I'm writing markup it has surprisingly good suggestions for labels/placeholder/whatever.

> Are there tasks where you explicitly avoid them?

I don't use it to fix bugs/errors. Occasionally I try and see what it comes up with. It has never once successfully fixed anything in my entire history of using it.

> How often do you find bugs in the generated code (and a corollary, how frequently has a bug slipped in that was only caught later)?

Since it's just typing what I'd expect to type myself, I probably have the same bug rate as before. I haven't seen it insert off-by-one errors (yet). That's probably the most likely one I can imagine missing.
