With that said, I don't want to be completely negative: what you are doing is a damned great idea, and I think just about all of us can get behind these types of processes. As someone who is learning about Solidity vulnerabilities (Blockchain @ Berkeley) right now, it is kind of funny this post comes around today. I have been fretting about how easy it is to write terrible smart contracts, and I kept thinking: well, how the hell does NASA do this? What about airframe manufacturing? That is how I want to develop my dApps. I don't want to deploy and hope, you know? I hate that feeling, not just because I don't want to lose anyone's funds, but because I want to be proud of my code and use it for good.
I am really looking forward to checking this out; I can't see it at work, but I will as soon as I get home! I think this is really important work you are doing. Just about everyone who is responsible in this space knows that doing contract development in a web-development frame of mind is dangerous and stupid. I even heard someone say the other day "agile smart contract development". Like, NO! An old drill sergeant once told me about combat: "Slow is smooth, and smooth is fast." Well, it turns out that applies to smart contract and blockchain development as well.
I care about the major complicated projects succeeding, e.g., 0x, Golem, Augur, Aragon, to name a few. They can go the DAO route, or I can do my part in helping. Help me get in touch with more projects if you're connected.
Open sourcing right off the bat will have the opposite effect. If it works at scale, we will be pitting the attackers against the developers. I would rather get this tool in the hands of developers first. I welcome other people developing the same :) - better yet: I've already developed a version, come help me make it useful for the community.
Having checked out your website, I am pretty excited about what you are doing. How are we ever to build services that deliver true value if the underlying architecture is not secure? This has been my biggest worry with the entirety of the space: that we will move too fast, lose the security-first focus, and end up building gigantic infrastructure on shaky foundations, thus turning into the very incumbents we aim to displace.
With the idea I am planning, I will most certainly need your services. I need all parties to 1) understand the benefits of trustless systems, and 2) be able to trust the trustless system in an environment where each party is relatively conservative, disliking and discouraging disruption. Security will be of paramount importance.
Anyways, thanks for making this; it's great to see such efforts being made in the space. We need less hype and more maths, as I like to say.
I wouldn't be surprised to find some of the big hacks have been project developers themselves moonlighting.
This was pretty mind-blowing for me. (I haven't really looked into program synthesis, so I may be slightly more susceptible. ;) There was one part of it that seemed problematic from my (perhaps flawed) understanding, though.
So I guess the idea is we have K+D = I (and this is roughly equivalent to 'D(K(s)) = I(s)' where 's' is a string to compress), so if we come up with a proof of 'I'—which would be a path through theorem space starting from I'm not sure what axioms—and we know 'I' is equivalent to 'K+D', and we already know the value of 'K,' then the value of 'D' is implied.
My question is what is the equivalent here to the subtraction operation that would allow us to say D = I - K? It seems very surprising to me that K and D would be structured in such a way that their own proofs would necessarily have any relationship to the proof we derived for I, so that such a subtraction would work so cleanly...
What you do is solve `\exists D: \forall arr: D(K(arr)) = arr`, where `arr` is an arbitrary-length array/bitstream. See the paper below. There is also really old code you can browse at the bottom of my personal page (linked in my profile); it doesn't compile anymore because of deprecated dependencies.
The arithmetic analogy breaks down when solving. You're not explicitly computing `I - K`; instead, you let the constraint solver pick values that satisfy the equation (hence SMT solving). Also, you are absolutely right that there is no reason for their structure to be related to identity. But the "damage" done by `K` is encoded in the state at the interface to `D`, and `D` had better undo that state to get back to identity, which is what the solver exploits.
 Path-based Inductive Synthesis for Program Inversion http://saurabh-srivastava.com/pubs/pldi11-pins.pdf
Ah—makes sense. I wonder if attempts at going in the other direction have been made, and initially constraining the structures of the known parts so that you could do something like a subtraction to get the unknown (probably a crazy dead-end idea I realize :).
> But, the "damage" done by `K` is encoded in the state at the interface to `D` and `D` better undo that state to get to identity, which is what the solver would exploit.
Interesting. So because we know it's an inversion scenario beforehand we can modify the solver to not be so general, and only bother considering solutions which modify the state in an inverse manner to our original program...
Cool. I'll be keeping this at the back of my mind to see if any ideas come up. Thanks and good luck!
What I had so far is at https://github.com/zitterbewegung/Belmod
[ Edit: Your idea is fantastic, btw. If you can find use cases where the size of contracts is small, it'll be amazing to have synthesis from test cases. ps: Do borrow some ideas from the way Sumit solves Flashfill. ]
My system is closed source, partially because it is computationally heavy and I'll spin up a server fleet for analyses, and also because I don't want it to be used for attacks, i.e., limit use to mostly the original developers of projects.
Connect with the email on the website. I'll look at your code a little later.
The key distinction is building a synthesizer, not just a verifier. These systems are hard to use, and synthesized code is the key to making them accessible. Watch the demo videos. See if you agree.
On the verifier side: QSP is a protocol/marketplace for connecting people who need audits with human auditors or free open-source tools (Oyente and Mythril). My verifier is more comprehensive and future-proof, at the expense of being compute-intensive, as discussed in my FAQ.
 FAQ: "How is this different from Ethereum's other tools?" https://synthetic-minds.com/pages/faq.html
In the "dApp" world, the top five by market cap are:
- EOS, a competitor to Ethereum. It's a platform for more dApps.
- VeChain Thor, which pivoted from an RFID tracking system for cargo to a "mobile wallet".
- Tron - "The platform is still in its infancy as is expected to reach the “Eternity” by 2023. In its final stage, the platform will allow content creators to directly monetize their content by creating their own token over the TRX blockchain."
- Omise - this is a "white label solution for mobile payments". It doesn't really use "dApps", it just used a token sale to raise money.
- Icon - a blockchain to integrate other blockchains.
None of these actually do anything.
If this is so great, is YC itself buying ICO tokens? Didn't think so.
In this case, we've known Saurabh for years (this is the second time we've funded him), and I've been talking to him about Synthetic Minds for close to a year. If you believe, as he and I do, that the story on crypto has barely started, then what he is building will have a huge impact on the future state of the world.
That's a bet we'll take whenever we can.
I am working on a simple premise: The DAO (perpetually running, globally accessible, democratic kickstarter) would have been a good project to see succeed. It failed not because it was a bad idea, but because we rushed into it without tools like this one. Augur, Aragon, 0x, Golem, Filecoin are other projects that might be worthwhile. Also ZCash (doesn't have smart contracts yet, but you should really look into the work Ariel Gabizon and company are doing.)
You wouldn't want to write Linux kernel code without a compiler, would you? You shouldn't write immutable code without formal verifiers. Agree?
And the fourth item on your list is wrong: it's OmiseGO, not Omise. Omise is the company behind OmiseGO, a project that aims to develop an Ethereum (Plasma) side-chain to host a decentralized exchange.
Some people have made millions; brand-new technology was created after a unique technical advancement that solves the Byzantine generals problem; even YC is investing. Yet old people cry "it is nothing but a fad!", so much that it becomes comical, like the Black Knight in Monty Python (especially when it comes from someone confusing dApps and tokens).
Eventually, you guys will have to admit you were wrong. A bit like the people who thought the horseless carriage was a fad. It's OK, we all get things wrong sometimes. No bad blood.
The technology is here to stay. Many people are ideologically opposed to the libertarian roots of cryptocurrencies. Many others regret not investing in time, or selling too early.
I get that. But maybe it's time to move on.
It doesn't even solve the Byzantine generals problem...
Is it not true that program synthesis will work, as long as a "contract" isn't too large or designed adversarially? Because if that weren't true you'd have a debugger for all circuits and programs (suitably restricted in computing power).
Yes. It should work. There's a good amount of work to be done though. :)
I don't follow what you mean by a "debugger" in this case.
Compare this to the alternative of paying $20,000 - $500,000 for manual audits.
(I'm not convinced formal verification with automated tools can actually substitute for manual audits. I think they're great for augmenting manual audits, but what if there are failure conditions I didn't think to assert as impossible?)
If you want a more precise cost analysis, shoot me an email for your specific contract.
And you are right about audits, with one caveat: the system will by default check for global properties as well, e.g., no double spend. Again, look at the examples in the FAQ. They had no assertions in them, yet we find errors.
Hm no I can't. What's the complexity of your algorithm?
I make multiple SMT queries. SMT solvers work well in practice, but they are solving a problem that ranges from NP-complete to undecidable (when quantifiers are involved).
Also, we are using a very hand-wavy definition of "complexity of code". This is not complexity-theory complexity, but rather the types of operations used and the looping/control structure. (In hindsight, I should have used a different word.) So 40 lines of multiply nested loops, with multiple hashes being taken and multiple dynamic dispatches over addresses, will take much, much longer to analyze than 40 lines of straight-line arithmetic.
My tool attempts to estimate the runtime midway through the analysis, but there is no real way to accurately estimate the cost upfront. That is why I was saying that if it costs ~$200 for 20-50 LoC, then, accepting all caveats and under wildly simplifying assumptions, 500 LoC might cost 10x-25x that.
For bugs: The demo videos on the FAQ page show the kinds of bugs it can find, e.g., the one that crashed the DAO, and ones that simpler tools will be unable to find. Of course you are right, the holy grail is finding bugs that escape both human audits and other tools. Haven't run it on very large codebases yet. Working on it!
You have a great idea there. Run with it!