johnramsden 373 days ago [-]
Pretty amazing what can be done with bash, and it looks very impressive. It's hard to believe how many features were pieced together so that using the library essentially gives bash everything a full-featured language has.

However, at the same time I kind of wonder what the use case would be for this. With all the extra syntax it would basically be equivalent to learning a new programming language, at which point it just seems to make more sense to use languages that have these features baked in.

While it looks like a really cool project, I could never see myself using it, as usually when I want to use a shell scripting language I want full portability, and end up writing in sh. When I want to write something more advanced I use real programming language.

brobdingnagians 373 days ago [-]
The only use case I can think of is an extremely constrained environment where bash is the only way of interfacing with it. I tried making a really basic router do some more complicated processing once because I couldn't figure out how to get any other programming environment on it; but later realized it was worth just buying better routers that could be modified... or figuring out how to anyways.
alerighi 373 days ago [-]
From what I understand, this framework is compatible only with Bash, and not with other POSIX shells. Typically on a router and other embedded platforms you find a minimal shell, like ash, that is POSIX-compatible but lacks the GNU extensions that Bash has and that this framework relies on.

You find a full version of Bash only on GNU/Linux (not even on other UNIX OSes like the BSDs), and there you can install the interpreter for the programming language of your choice.

devonkim 372 days ago [-]
A coworker of mine wrote a Hadoop job scheduler (an Oozie replacement) in bash because our defense network’s packaging restrictions were so awful. I suspect that once this library shows up in DoD projects, even bash will wind up being replaced / neutered.
oblio 373 days ago [-]
Or BOFHs in banks and other "secure" environments.
shawn 373 days ago [-]
No way. Having to program a router in bash seems fun! Send me your old router!
nisa 373 days ago [-]
Check out OpenWrt: you have a POSIX-compliant ash shell from busybox and lots of shell magic in the base. You can parse JSON from the shell and interact with the router via a system bus, where you can write plugins in POSIX sh that can be called via JSON-RPC from the browser. - example script:
Annatar 373 days ago [-]
Yep, for the same reason I program in the original Bourne shell from 1977. It runs everywhere without modification, including under bash, since it doesn’t use any shell-specific features, so I don’t bother writing bash-only code.
mrybczyn 373 days ago [-]
As a long-time bash user (and abuser), I will just point you to this document, which distills a lot of wisdom: "If you are writing a script that is more than 100 lines long, you should probably be writing it in something else"

e40 372 days ago [-]
Perhaps the "should" is soft, but I definitely disagree. There are lots of tasks where BASH is the best tool and more than 100 lines is needed.

I have build scripts of complex systems that are >1000 lines. They are easy to understand and maintain, and I cannot imagine doing it in another language.

js2 372 days ago [-]
Here's what I find happens. I'll work on a script in bash because it really seems best for the job up front and quicker than coding in Python (say). The script starts to work its way up in size, 50 lines, 100 lines, 1000 lines. Then inevitably there's just "one more thing" that bash really isn't suitable for, and I wish I'd started with Python in the first place, because now I have to rewrite a 1000 line script.
antod 372 days ago [-]
Exactly my experience, but I find my tolerance threshold more like 100 lines these days.
msla 372 days ago [-]
It's like the wisdom about functions longer than a page or a screen: If it's more than a screenful of actual logic, breaking it up will help you continue to understand it. If it's a long function which is essentially a switching yard, like the central dispatcher in a bytecode interpreter, breaking it up won't make it any easier to understand, because it's pretty simple as it is.

Bash is good for switching yard code, where you're gluing a lot of program invocations together with a few variables and a little bit of logic. It can certainly do other things, but that kind of code is the least risky when it becomes big.
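
To make "switching-yard code" concrete, here's a hedged sketch (invented commands and data, not from the thread) of the kind of glue bash is good at: a few variables, a dispatch, and pipes between programs.

```shell
# A mode switch dispatching to external tools -- hypothetical example
input='one
two
three'
mode="count"
case $mode in
    count) printf '%s\n' "$input" | wc -l ;;                      # -> 3
    upper) printf '%s\n' "$input" | tr '[:lower:]' '[:upper:]' ;;
    *)     echo "usage: {count|upper}" >&2; exit 2 ;;
esac
```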

Cthulhu_ 373 days ago [-]
The (possible) problem is that by writing it in something else you're adding a dependency on either a runtime (python, ruby, js, etc) or on a compiler pipeline (c, go, etc), so I can understand an aversion to switch to one of those.
jchw 372 days ago [-]
Bash is already quite a dependency in itself. It's nowhere near as universal as a plain Bourne-compatible shell, and some systems come with fairly old versions of Bash that are unable to cope with some of the more complicated code.

If folks are being honest with themselves, they will find it's significantly saner to rely on a Python interpreter being available than to write Bash scripts. And if runtime dependencies are an issue, Go binaries have fewer dependencies than any Bash script.

Carpetsmoker 372 days ago [-]
Adding to this, it's not just bash you have to worry about: also grep, wc, head, cut, and all other external utilities you're calling.

There are actually quite a few incompatibilities between different implementations and versions of many of those utilities. Even if you try your best to restrict to the POSIX standard it's quite easy to "accidentally" use an extension, leading to breakage when someone runs it on Linux distro $foo or macOS.

Dependencies are a bigger problem in shell scripts than in most other environments.
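
A hedged illustration of how easy the drift is (my examples, not ones from the thread): `echo -e` is a bashism that dash's built-in `echo` doesn't honor, and `sed -i` takes different arguments on GNU and BSD, while the portable spellings are only slightly longer.

```shell
# Portable replacement for the bashism 'echo -e "a\tb"':
printf 'a\tb\n'
# Portable extraction with plain POSIX sed (no GNU-only flags needed):
version=$(printf 'version=1.2.3\n' | sed 's/^version=//')
echo "$version"    # 1.2.3
```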

gjstein 373 days ago [-]
I suppose I only use Bash infrequently, but when I do it is normally in an effort to make scripts that I know will run on my various machines for data processing and the like. I am hesitant to use a framework like this (as exciting as it seems) because I fear it would mean managing more dependencies across my different development machines. What are people using Bash for that would require enough complexity that a framework like this becomes a godsend?

Edit: typos.

unixhero 373 days ago [-]
Building things.

It comes with logging and unit testing. What's not to like.

There are a lot of people who do not know any other languages, believe it or not.

nerflad 373 days ago [-]
> There are a lot of people who do not know any other languages, believe it or not.

This is the only reason I can see to use this over perl (which it seems to be heavily influenced by) or python. Then again, learning how to use these higher level language features would probably require equivalent effort to just learning a beefier scripting lang such as those.

Also: As far as hack value, what a great project. I don't mean to bikeshed.

omeid2 373 days ago [-]
I am not too sure about this. The only thing that makes bash tolerable for me is that it is consistently the same kind of ugly across all the platforms.

Still, one already has to worry about the programs a bash script depends on and make sure the correct versions are installed, and that is hard enough. I don't think adding new syntax and package management is going to help Bash, only make it more complicated and error-prone.

muterad_murilax 373 days ago [-]
What programs do they depend on?
chme 373 days ago [-]

Bash is a programming language you don't want to use, but end up using anyway. Any effort to make it a bit less clunky and maybe even a bit safer is welcome.

hnzix 373 days ago [-]
I was rockin' a fancy zsh config on my devbox until I started doing more ops work and traipsing around foreign servers. My muscle memory became useless and back to bash it was.
Symbiote 373 days ago [-]
I'm considering adding Zsh to the base install of all my servers, since it's so useful to be able to write a script with something like

for the most-recently-modified file, or

to get a variable in lower case. I do deploy a .bashrc with some shell aliases, which are the most annoying things to miss when I'm used to using them locally.
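
My guesses at the elided snippets (not the commenter's actual code): zsh glob qualifiers and parameter-expansion flags, alongside portable approximations.

```shell
# zsh-only, shown as comments since plain sh can't parse them (my guesses):
#   print -r -- *(om[1])      # glob qualifier: most-recently-modified file
#   print -r -- ${(L)var}     # expansion flag: lowercase a variable
# Portable approximations:
var='Hello World'
printf '%s' "$var" | tr '[:upper:]' '[:lower:]'; echo   # hello world
newest=$(ls -t | head -n 1)   # caveat: breaks on filenames with newlines
```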
tomc1985 373 days ago [-]
I've never understood how people can insist on super fancy custom configs, it is so normal to burn through servers. Just learn the defaults, life is way easier...
TeMPOraL 373 days ago [-]
It is so normal for sysadmins. Not for developers. Maybe it's more normal now, but that's a recent development.

(These days, I hope I can just develop my super fancy Emacs config in one place, and use TRAMP to work with remote machines through an environment customized for me.)

kokey 373 days ago [-]
Some people think we don't need sysadmins nowadays. It's all in the cloud, so we only need developers, and some of them can do devops. Then you end up with systems where people all log into the same user with a shared SSH key, which has sudo root access, and access everything by IP address because who needs DNS; where there's no automated or regular patching because we just roll a new release with a new system image because everything is ephemeral, or rather will be ephemeral when we're done with it, and in the meantime perhaps someone will run ansible on it again if they're brave, since no one has run it since the system was built. Oh, and we're on call, because we get outages when the disk fills up, since who needs to monitor and automatically manage that sort of thing if we never use persistent storage.
tormeh 372 days ago [-]
>shared SSH key

Wait, you use keys? Wow, man. I prefer good old-fashioned passwords, the same one on all servers, of course. And keep it memorable.

snlnspc 373 days ago [-]
Too real.
Fiahil 373 days ago [-]
Fun fact: This depicts word for word my previous job (company size around 50 people)
tomc1985 372 days ago [-]
I'm sorry those guys don't know how to build a working system...
tomc1985 373 days ago [-]
Throughout my career in dev I have always had to have a presence on sometimes dozens of servers. If something can't be fully customized in a few keystrokes it is not worth the effort, especially if whatever is being customized gets thrown away.
oblio 373 days ago [-]
You'd be surprised how different dev careers can be.

In Windows land it's quite frequent that the dev never leaves Visual Studio. In Eclipse/Java land it's something similar: even though the deployments would often be on Linux for cost-saving reasons, the devs would all be on Windows.

I should write a blog post or something at some point about the differences in dev careers. Enterprise software development is very different from embedded software development which is different from web development for the mass market which is different from mobile development, etc.

bradknowles 372 days ago [-]
And then those Java developers have to try to interface their code to Jenkins to get it to do CI, and the sysadmins have to deal with the Java guys who just click on the "Run" button in Eclipse and don't know basic things like what directory the code needs to execute from.

Don't get me wrong, those Java developers can be great guys, but when the GUI-only people have to interact with the CLI for the first time, all sorts of problems can manifest themselves.

mxuribe 373 days ago [-]
I would look forward to your blog post! Seems an interesting topic.
leksak 373 days ago [-]
Link it here
viraptor 373 days ago [-]
I understand this as 3 categories of deployment size:

- you've got 1 server you visit rarely so don't care about the server shell

- you've got <10 servers you care about and try to replicate your shell config (because you use it daily)

- you've got >10 servers and you almost never log into them because the logs, metrics and deployment are exported to a centralised system... so you don't care about the server shell

chme 373 days ago [-]
It depends on your setup.

If you have a central home directory that is mounted on every server via NFS and zsh is installed everywhere, or if you have set up some (semi-)automated file syncing/deploying and develop on just one machine that is set up correctly, then why not?

The main reason for programming in Shell/Bash, for me, is that it's available on the tiniest of systems, like an initramfs or routers. There aren't many options for scripting there. But I also don't know if Bash Infinity makes sense in those places. So while this project looks interesting, I don't see myself using it anytime soon.

tomc1985 373 days ago [-]
It's the same reason for favoring defaults. They are universal, and everywhere, and let one get to work immediately. Constantly porting over customizations just so a spoiled butt can work comfortably gets tiresome. It's even easier than fussing around trying to automate the whole thing...

Bash is intentionally scoped to be small. The language seems almost intentionally difficult to grok for large projects. There are very few legitimate use cases for large bash scripts, in my opinion, and frameworks only make that harder to see. Javascript also started scoped small, but it was everywhere, and look at it now...

TeMPOraL 373 days ago [-]
> There are very few legitimate use cases for large bash scripts, in my opinion, and frameworks only make that harder to see. Javascript also started scoped small, but it was everywhere, and look at it now...

Maybe the meaning of "large" depends on familiarity, but somehow, a big part of UNIX/Linux is a pile of large bash scripts...

zaarn 373 days ago [-]
zsh is quite fun to use tbh and it's worth the pain of not having it on some remote boxes IMO (and I'm working on getting most of those on zsh via Ansible)
tr0ut 372 days ago [-]
I certainly get it. Zsh and the like offer some really nice functionality missing from standard bash. However, that muscle memory is lost. Zsh etc. are certainly cool for your own tinker box, but not good when dealing with lots of disparate systems. I don't even like to use a lot of aliases because of this.
Myrmornis 373 days ago [-]
Same here with zsh. I switched my laptop shell to bash, since working all the time in the interpreter at the shell prompt makes me better able to write bash scripts. I don't particularly want to deal with the minor syntactic differences of zsh, or the totally different fish, seeing as I have to know bash well.
themodelplumber 373 days ago [-]
Same here with Fish
ndesaulniers 373 days ago [-]
This is how it starts; once someone figured out php could do more than just #include html fragments, Facebook built an empire with it.

Frankly, I'm amazed Bash can do any of this. I'm happy to use a fullscreen terminal all day, and come up with too-clever unix pipelines, but damn do I hate writing anything in Bash, especially when it comes to control flow.

kevan 373 days ago [-]
I try to avoid writing bash scripts, so I hopefully won't use this, but it looks like a very well-thought-out piece of software. Fantastic readme with rationale and examples for every feature.
stephenr 373 days ago [-]
Bash specifically isn't a great target runtime for cross-platform/distro scripts.

If a program is simple enough (or the programmer determined enough) to write it in Bash, POSIX shell is a better option, as it's well defined, and well documented as to what does what, and how.

Relying on Bash specifically makes things much more complicated for not much benefit.

IshKebab 373 days ago [-]
Python or Go are better options. Much more robust.
anacrolix 372 days ago [-]
Python maybe. Writing scripts in Go is a nightmare.
phaemon 370 days ago [-]
Can you articulate why?
Sir_Cmpwn 373 days ago [-]
If POSIX shell isn't good enough, you should be using a proper programming language. Encouraging this sort of thing is harmful.
_jal 372 days ago [-]
Agree that accepting the POSIX limitations is healthy if doing so works for the project; disagree with the prescriptive tone.

There are lots of things people "should" do, and lots of reasons why they don't; sometimes, they're good ones. So how's this for a "should"?

Prescriptive advisors should be very cognizant of the dangers caused by people following such advice without understanding why they are doing so. (Hint: the people who need such advice usually don't understand it.) Additionally, such advisors should accept responsibility when those they give advice to do really weird things while trying to follow it.

athorax 372 days ago [-]
Could you clarify? Unless the systems you are using require POSIX compatibility, why limit yourself? I would agree that using a proper programming language for anything more than fairly basic scripts would be a better option, but I don't necessarily see the harm.
Sir_Cmpwn 372 days ago [-]
Well, for one, limiting yourself is a good exercise in restraint. POSIX shell is simpler than the approach shown here. Using a simpler approach keeps your code more readable and understandable, and using POSIX shell as a baseline means a wide variety of people can understand what your code is trying to accomplish.

And you never know, you may someday find out that you want to move to an OS which doesn't support bash, or you start to distribute your software and the complaints roll in from the BSD users trying to port it to their OS, or bash makes some backwards-incompatible changes to some arcane behaviors this tool relies on, and since bash isn't standardized you didn't know until it was too late.
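
A small sketch of the divergence (illustrative constructs, not from the post): bash-only syntax next to the POSIX spelling that every sh can run.

```shell
# bash-only: [[ $x == foo* ]], arrays, ${x,,}; none exist in POSIX sh.
# The POSIX equivalent of a prefix test uses case:
x=foobar
case $x in
    foo*) echo "prefix match" ;;
    *)    echo "no match" ;;
esac
```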

I wrote a blog post about this, if you want to read more:

tomc1985 373 days ago [-]
The thought of a Bash framework scares me. Next thing you know someone will write React in it...

Just freakin learn perl or python! Do not let this beast grow any larger.

chriswarbo 373 days ago [-]
> Just freakin learn perl or python!

For calculations? Sure; but that's not really what bash is for.

Shells are excellent at invoking and managing subprocesses and piping. Python (not sure about Perl, I've not really used it) is terrible at those things.

Take a simple, very common piece of bash code like `foo | bar`. How might we do this in Python? Maybe we reach for the builtin `subprocess` module:

    import subprocess
    foo_output = subprocess.check_output(['foo'])
    bar        = subprocess.Popen(['bar'], stdin=subprocess.PIPE)
    bar.communicate(foo_output)
Except that this isn't a pipe: it will run `foo` to completion, storing all of the output in memory, then call `bar` on this data, e.g. `foo_output=$(foo); echo "$foo_output" | bar`. This is unsuitable for long-lived processes (e.g. if `foo` is long-lived and `bar` is meant to be logging its output), or if there is a lot of data (e.g. `foo` is generating GBs of text and `bar` is summarising it, like `wc -l`).

OK, maybe instead of `check_output` (which is blocking) we make `foo` asynchronous. Note that we can't use `foo.communicate` to get its output, since that would also block. What if we just shuttle data between the two manually, line by line (urgh)?

    import subprocess
    foo = subprocess.Popen(['foo'], stdout=subprocess.PIPE)
    bar = subprocess.Popen(['bar'], stdin=subprocess.PIPE)
    while foo.poll() is None:
        bar.stdin.write(foo.stdout.readline())
This appears to work, especially when testing with small amounts of data. Yet it's actually a time bomb, since it will deadlock when the subprocesses' pipe buffers fill up (this is why the documentation tells us to use `communicate`, except that we can't, since that's blocking).

It's at this point that we move the data shuttling into a separate thread:

    import subprocess
    import threading
    foo = subprocess.Popen(['foo'], stdout=subprocess.PIPE)
    bar = subprocess.Popen(['bar'],  stdin=subprocess.PIPE)

    def shuttle():
        while foo.poll() is None:
            bar.stdin.write(foo.stdout.readline())

    thread        = threading.Thread(target=shuttle)
    thread.daemon = True
    thread.start()
Now we've got a pile of code mixing multiprocessing with multithreading, in a domain known to have deadlocks, with hand-written hard-coded line buffering.

At this point, I'd say just freakin learn bash!

kamaal 373 days ago [-]
>>not sure about Perl, I've not really used it

Perl is great at these things.

In fact, you can speak the shell's native tongue from Perl.

    use strict;
    use warnings;

    my @output = `your_command | your_another_command`;
    foreach my $line (@output) {
        chomp $line;  # remove the trailing newline
        if ($line =~ /your regex goes here/) {
            # on a match, use $1, $2... to get the captured groups
            # your business logic goes here
        }
    }
tomc1985 372 days ago [-]
Well, technically, with grave quotes, you can talk shell native with Ruby or Python as well.
antod 370 days ago [-]
Ruby yes, but not in Python right?
kamaal 370 days ago [-]
Python isn't that great as a scripting language in general.

It's really more like a glue language, a step below Java.

tomc1985 372 days ago [-]
I agree, subprocess is a pain in the ass. But there are other ways of invoking things, like with grave quotes.

Personally I think control structures in bash are too convoluted and easy to get lost in.

Also if you are running long-lived high-output processes and you want your program to be interactive, then of course you have to do multithreading shenanigans. At that point you're going to need a proper event queue or something at the very least. The first use case I can think of (logging dmesg -w) would definitely not work using a naive approach

Too 371 days ago [-]
I haven't run it but I'm quite sure you can do this to get direct pipe without having to run foo to completion before starting bar:

    foo = subprocess.Popen(['foo'], stdout=subprocess.PIPE)
    bar_output = subprocess.check_output(['bar'], stdin=foo.stdout)
    foo.stdout.close()  # close our copy of the read end, then reap foo
    foo.wait()
Though the idea with Python is that you shouldn't have to do as much piping as you do in bash: no need for xargs, cut, find, etc. Just process the data inside Python code. So the pain of having to use two lines instead of a | isn't as big as many people claim.
gvalkov 371 days ago [-]
I think you're making this more complicated than it needs to be. For simple things, it's ok to just:

  from subprocess import run
  run('foo | bar', shell=True, check=True)
chriswarbo 370 days ago [-]
As that argument's name suggests, that's invoking a shell (`/bin/sh` on POSIX systems).

Shells are excellent at invoking commands and piping, as I said.

Doing things like piping with Python instead of a shell is indeed more complicated than it needs to be. That was my point ;)

Annatar 373 days ago [-]
Both Perl and Python are way more complicated than shell programming, and in all my decades of doing so on the command line, I’ve yet to find a problem which a full blown shell + AWK couldn’t solve. And that combination is still simpler to program in than Python or Perl. Had you grown up on a real UNIX, you’d have never written what you wrote.
kamaal 373 days ago [-]
Perl goes far beyond anything you can ever do in Bash, even for the basic things. Then there is also the famous Tom Christiansen essay on C-shell:

You can even do Lispy things in Perl. You can do functional stuff:

You can do unicode regexes in Perl:

You can do large file processing in Perl:

Perl's file and string manipulation capabilities have no match.

If you use awk, you have to think in the line paradigm. Perl gives you not just that but a lot more. Perl regexes are by far the strongest of their kind in any programming language out there (they're first-class entities).

In fact, the whole reason Larry Wall invented Perl is that at some point you max out what you can do with things like awk and sed.

Annatar 373 days ago [-]
I spent 3.5 years debugging and maintaining Perl code. For a living. I have formal education in programming applications in Perl.

And I stand by what I wrote about shell + AWK, especially AWK, any day of the week.

By the by, I can do an unlimited number of things in a shell program, things no other programming language can do, because I can call any other program from it. After assembler, shell is the second most powerful tool because of that characteristic.

kamaal 373 days ago [-]
Sure, you can curl a data endpoint and then parse the resulting JSON/XML using regexes, cut, and tr in bash+awk+sed, but your code will break on even a slight change in the rendered output.

These sorts of things are just the beginning. People would be surprised how hard it is to parse something like a CSV.

You start to discover the limits when you get into things like error handling, writing slightly complicated regexes, or needing more involved control flow with if/for.

More everyday use cases that come to my mind are use of things like Data::Dumper, qw, open/while<>/close paradigm code, arrays, hashmaps, grepping over large lists, unicode, heredocs, handling binary data etc.

Nothing to take away from Bash. But it's really for gluing together a small bunch of unix commands in sequence. If you are writing anything more than a 100-line program, you are better off with Perl.

Annatar 371 days ago [-]
“but your code will break even on a slight change of output rendered.”

I see you still don’t get it. As an experienced shell programmer, I wouldn’t construct a JSON/XML parser; I use libxslt for XML and I compiled jq for parsing JSON. It’s the UNIX way.

The Zen of this eludes you still. Think deeper.

kamaal 370 days ago [-]
>>The Zen of this eludes you still.

Seems like a blessing in disguise in this case, if such zen exists.

I should use shell+awk+sed+libxslt+jq and all that, instead of using Perl + XML::Simple?

Annatar 369 days ago [-]
Indeed, for that XML::Simple would be an operational nightmare to deploy and maintain (being fused into Perl’s tree), and your code would be unreadable and unmaintainable by anyone else, for such is the nature of Perl.

And then there is the UNIX way, with both jq and libxslt being reusable for other data without having to write a dedicated program. sed by the way is unnecessary if one has mastered AWK.

There are too many things you have not even begun to consider yet. You could start with taking those developer-convenience glasses off, and putting the system engineer glasses on.
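
For what it's worth, the sed-is-unnecessary claim is easy to illustrate: common sed one-liners map directly onto AWK (hypothetical log lines, my examples).

```shell
# sed 's/error/ERROR/g' as awk:
printf 'error: disk error\n' | awk '{gsub(/error/, "ERROR"); print}'
# sed -n '/^warn/p' as awk:
printf 'warn: low disk\ninfo: ok\n' | awk '/^warn/'
```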

tomc1985 372 days ago [-]
I had a career writing perl code for a few years too.

Believe it or not I also like shell for small tasks.

AWK was super powerful but I never had much use for it (preferred sed when I absolutely needed that kind of fuckery).

tomc1985 372 days ago [-]
Perl sits in a happy medium between bash and full-on programming. Yes bash makes piping really easy but IMO that is about it for advantages.
blumomo 373 days ago [-]
What is the value of an OO bash language, except that people who are familiar with bash can now do OO? Or, asked differently: why should I prefer Bash Infinity over a CLI written in any existing OO language, say Python, which comes with a much bigger standard library and is already heavily tested?
User23 373 days ago [-]
This is a glorious absurdity. I'm seriously wildly impressed. Excellent work!
cranjice 372 days ago [-]
This is an impressive piece of work. However, my use case for bash is simple, concise, and portable scripts.

I don't look forward to encountering (read: trying to fix) this in production.

garettmd 372 days ago [-]
Keyword for me is portable. The reason I (and I assume most people) use bash is that it's ubiquitous. This framework wouldn't be. Not that it's not impressive and looks fun to try out. But I can't see a lot of use for it in practical terms.
Alir3z4 373 days ago [-]

I love how it takes bash to a much higher level. Bash is something you'll end up using, accepted or not. It's available by default and is the best (only?) thing to connect all the dots together when working on Linux machines, either local or servers.

This framework is lovely and indeed something that will make my life much easier.

Great job, great job!

bloopernova 373 days ago [-]
This is amazing, thank you to the author for creating this wonder!

I can think of maybe half a dozen examples in the past few years where this would have been useful. Not all IT stuff is kubernetes or plain containers, so a well-defined bash-based language is something I can see certain shops embracing.

Great work!

kabes 373 days ago [-]
Nice work. Although it seems every time I need to use bash (actually quite regularly) I've forgotten everything about it and need to look up all the basics again. Maybe I don't try hard enough, or maybe its inventor had a totally different logic than I do.
mar77i 371 days ago [-]
Bash was reinvented many times over; it even says so, being the "Bourne Again Shell". So what you have here is a classic unix sh with extra bells and whistles, and some workarounds to shoehorn it onto modern environments that allow newlines and spaces in file names. For example, it reinvents the [ ... ]/test builtin as [[ ... ]], which bypasses some shell behaviors that can be surprising at times.
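
A sketch of the kind of surprise `[[ ... ]]` bypasses (my example): with the `[` builtin, an unquoted variable is word-split before `[` ever sees it.

```shell
f='two words'
[ -n "$f" ] && echo "quoted: ok"    # quoting is mandatory with [
# [ -n $f ]  would expand to '[ -n two words ]' -> "too many arguments"
# In bash, [[ -n $f ]] works unquoted, because [[ is syntax, not a command.
```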
cjohansson 373 days ago [-]
Wow, this project really changed my view of bash. It looks more like an ordinary programming language when used with this framework.
Annatar 373 days ago [-]
Why are people trying to shove everything into frameworks? Any time you use a framework, you have to play by someone else’s rules, limited by that person’s imagination, experience and insight. If you need something that the framework doesn’t provide, well, tough noogies: you have to write it yourself! Might as well write it myself anyway with only the parts I need, keeping it lean and mean (sorry, but after so many decades of disappointments, I don’t trust other people to keep it lean and mean for me).

Looking at the modules, this is not how an experienced shell programmer would write code; functional — yes; object oriented — never. Part of the reason those of us who write in the shell do so is to escape the horror of meaningless, kilometers-long stack traces and unhandled exceptions, allowing us to keep our code small and fast.

And why bash? Couldn’t the author have picked ksh93 and built upon a powerful, POSIX-compliant programming language? Why aren’t we as an industry striving to be better, instead of propagating de facto fashion trends?

hyperpape 372 days ago [-]
For everyone who's saying "if you need to write a real program, don't use Bash", you're not really addressing the issue. This is clearly an attempt to make a real language out of what you have in Bash. Repeating platitudes about real languages doesn't help. The question is, how well does this framework achieve that?

That said...I have my doubts:

Carpetsmoker 372 days ago [-]
They are not "platitudes" though, but real-world practical concerns.

To give a specific example, even with these extensions you won't be able to handle NULL bytes in your shell script. So if you ever want to store, say, a PNG image in a variable you're screwed.

This can be a real problem. I once wrote a shell script to deal with some email data, which worked brilliantly right up to the point we had to deal with emails containing attachments that weren't base64-encoded (an external provider sent them to us that way), at which point it all came crashing down. It took me ages to discover why emails were being mangled, and I had to rewrite the entire thing in Python.

It was only used in our development environment and not production, so no permanent damage. But still a waste of my time.

There are many other cases where you may want to deal with binary data.
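
The NUL limitation is easy to demonstrate (a sketch; newer bash even prints a warning to stderr when it drops the byte):

```shell
# Command substitution cannot store NUL bytes in a shell variable:
data=$(printf 'a\0b')            # 3 bytes go in...
printf '%s' "$data" | wc -c      # ...but only 2 come out; the NUL is dropped
```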

hyperpape 372 days ago [-]
What you just said is a great observation, but it's not what I was criticizing. I was criticizing people who say "Bash is bad, so I hate this new thing." But since this is a project to build something good on top of Bash, that doesn't really address the issue.

What you're (very helpfully) pointing out is that is a case where it's just a bandaid, and Bash's flaws shine through. I suspect there will be more. But other comments are just saying "Bash bad", and those are irrelevant.

crehn 373 days ago [-]
Cool stuff, very impressed!

Bash is nice since it's available on many systems and is a slightly better superset of sh. However, Bash is a large, ugly beast: a million edge cases, inconsistencies, obscure options, counter-intuitive intricacies, and minute differences between versions and other shells. There's already a ridiculous amount of completely unnecessary complexity to it; I wouldn't want to increase that surface any further.

In practice, I try to keep stuff minimal, transparent and close to defaults. Usually that means -x and POSIX compliance when possible.
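One concrete instance of those minute differences: bash's `[[ ... ]]` pattern matching is not POSIX, but a `case` statement does the same job in any sh (a minimal sketch):

```shell
#!/bin/sh
# `[[ $f == *.png ]]` is a bashism that dash/ash reject outright;
# `case` globbing is POSIX and works in every sh.
f="logo.png"
case "$f" in
    *.png) echo "image" ;;
    *)     echo "other" ;;
esac
# prints: image
```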

ejanus 373 days ago [-]
I am learning Bash and I'm enjoying it. I didn't know about Bash unit testing (bats) until recently. Bash Infinity would help me learn more... Shout out to the creator.
jacobush 373 days ago [-]
Next up: a package manager and virtual environment.
erikb 373 days ago [-]
And a JIT-compiler + VM so it can be used on all environments.
jwilk 373 days ago [-]
erikb 373 days ago [-]
I would argue, though, that having some people actually take it seriously and start an implementation of it is awesome, part of the joke, and certainly sometimes even the source of some really cool stuff.
twic 373 days ago [-]
Hmm. It probably wouldn't be too hard to build a Truffle runtime for bash ...
jacobush 373 days ago [-]
Well, with Emscripten, how hard can it be to get this running on Node? And access the DOM on the browser?
kureikain 371 days ago [-]
This is so great. After 5 years of an SRE career, I love Bash so much. Bash to me is about gluing tools together to do something. I think JSON is what makes me struggle with Bash the most; jq solves it to a certain extent, but it's still a PITA.

CI, health checks, and utility scripts (especially for extracting data from logs) are all valid use cases for this.
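For the log-extraction case specifically, POSIX awk often carries most of the weight; a small sketch over an invented log format:

```shell
#!/bin/sh
# Pull path and status for server errors out of an invented
# access-log format: "<timestamp> <method> <path> <status> <latency>".
awk '$4 >= 500 { print $3, $4 }' <<'EOF'
2019-01-01T00:00:00 GET /health 200 3ms
2019-01-01T00:00:01 GET /api/v1/users 502 120ms
EOF
# prints: /api/v1/users 502
```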

cjhanks 372 days ago [-]
If you write a working program in BASH, it will probably work until the day you die. That's why I use BASH for any script I want to be able to forget about.

Even for long scripts: I have written multi-thousand-line programs in BASH, separated into different modules.

Some of this could be helpful, but the non-standard syntax kind of defeats the purpose.
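For what it's worth, the module pattern mentioned needs nothing non-standard; a common sketch is an include guard at the top of each sourced file (all names here are illustrative):

```shell
#!/usr/bin/env bash
# lib/log.sh (illustrative name) -- sourced from many scripts; the
# guard variable makes repeated `source lib/log.sh` calls harmless.
[ -n "${__LOG_SH_LOADED:-}" ] && return 0
__LOG_SH_LOADED=1

log_info() {
    printf '[INFO] %s\n' "$*"
}
```

(`return` at the top level is legal only in a sourced file, which is the point.)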

ElijahLynn 373 days ago [-]
I didn't see this easily, but it is in the project description.

Nice syntax highlighted example here >>>

Teknoman117 372 days ago [-]
Makes me think of bash on balls, a web platform written in bash -
artellectual 373 days ago [-]
Actually bash is extremely powerful. If you do serious ops work of any kind, learning bash goes a very long way. I work a lot with bash and this lib just looks very exciting.
samat 373 days ago [-]
Very cool from a hacker perspective, but I'd ask 'why not use one of the languages you've mentioned, instead of inventing a new one?'
swsieber 373 days ago [-]
This looks cool. It looks like it wouldn't be too hard to write a preprocessor to inline the includes (to get it down to a single file).
ape4 373 days ago [-]
In the System V init days, most of the init scripts included (via the dot command) the same standard files.
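That pattern is still plain POSIX sh; a tiny reconstruction (filenames invented):

```shell
#!/bin/sh
# SysV-style sharing: helpers live in one common file, and every
# init script pulls them in with the POSIX dot command.
cat > /tmp/rc.common <<'EOF'
say_status() { printf 'Starting %s: OK\n' "$1"; }
EOF

. /tmp/rc.common        # `.` is the POSIX spelling of bash's `source`
say_status mydaemon     # prints: Starting mydaemon: OK
```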
jazzyjackson 373 days ago [-]
brb spending the next month learning bash infinity


jwilk 373 days ago [-]
To be a "standard library", it would have to be shipped with bash itself. It is not.
muterad_murilax 373 days ago [-]
If only such a thing existed for Windows Batch.
mxuribe 373 days ago [-]
There's git for windows [] that comes bundled with git bash... While I have not done complex things in git bash, so far it feels like good ol' bash... but on windows. I wonder if this bash infinity could run in git bash...?
y4mi 373 days ago [-]
that's just Cygwin without its dependency manager...
wenc 372 days ago [-]
Batch is a much more limited language than bash. PowerShell, on the other hand, is a full-blown scripting language, and you can do some impressive things with it.

Also, on Windows 10, you can run bash directly through the Windows Subsystem for Linux. It gives you a full native Ubuntu environment through a binary-compatibility layer (so it runs at near-native speed, not virtualized).

navait 372 days ago [-]
Are there similar frameworks for ZSH?