Introduction to the Python-Ethereum ecosystem

This post is targeted at developers who are interested in getting started developing on Ethereum using Python.

Is Python the right choice?

It's important to know what you are planning to build because Python may not be the best choice for certain projects.

If you are planning on building a user facing application that will run in a browser then Python may not be the right choice for you. DApps that run in the browser are likely to benefit from a JavaScript toolchain, so you may be better off looking into Embark or Truffle.

One of the powerful features of a DApp that is written as pure HTML/JS/CSS is that it can be completely serverless. Choosing Python as part of your web toolchain may anchor your application in the web2 world.

Outside of the browser however, Python and Ethereum work very well together.

Base Layer Tooling

The pyethereum library by Vitalik Buterin has been the base for most of the tooling that I've written in the Python ecosystem. If what you are looking to write deals with low level EVM interactions then this library is a great place to start.

Interacting with the blockchain

When you want to actually interact with the blockchain from Python you'll probably want to use JSON-RPC. There are a few Python client implementations to choose from.

The ethereum-rpc-client and ethereum-ipc-client libraries provide a client for interacting with the JSON-RPC service over either HTTP or an IPC socket respectively. They can both act as drop-in replacements for each other as they expose the same API over a different transport layer.
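
To make the transport concrete, here is a minimal sketch of the kind of raw JSON-RPC request these clients wrap, using the requests library against a local node (the default port and the eth_blockNumber method are standard; everything else is illustrative):

# A minimal sketch of a raw JSON-RPC call over HTTP, assuming a local
# node with its JSON-RPC server enabled on the default port (8545).
import requests

payload = {
    "jsonrpc": "2.0",
    "method": "eth_blockNumber",  # standard JSON-RPC method
    "params": [],
    "id": 1,
}
response = requests.post("http://localhost:8545", json=payload)
block_number = int(response.json()["result"], 16)  # result is a hex string
print(block_number)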

Interacting with Contracts

To interact with contracts on the blockchain, you'll need to encode and decode the inputs and outputs according to the Ethereum Contract ABI. Low level tools for doing this are available in ethereum-abi-utils. This library provides the ABI encoding and decoding functionality from within the pyethereum library as a standalone package with fewer dependencies.
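
To give a feel for what that encoding actually looks like, here is a hand-rolled Python 3 sketch of encoding a call to a transfer(address,uint256) function. The 4-byte selector is the well-known first 4 bytes of keccak256 of that signature, hard-coded here to avoid a keccak dependency; in practice a library handles all of this for you.

# A hand-rolled sketch of ABI-encoding a call to transfer(address,uint256).
SELECTOR = bytes.fromhex("a9059cbb")  # first 4 bytes of keccak256 of the signature

def encode_transfer(to_address, amount):
    # Each argument is left-padded to a 32-byte word.
    to_word = bytes.fromhex(to_address[2:]).rjust(32, b"\x00")
    amount_word = amount.to_bytes(32, "big")
    return SELECTOR + to_word + amount_word

data = encode_transfer("0x24d76e09a1b82bfb7fdefa3fb0df1bab01e5b824", 12345)
print(data.hex())  # the hex payload for a transaction's data field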

This method of interacting with contracts is a bit clumsy and verbose, so you may want to take a look at the ethereum-contract library. It provides a Python class that represents an ethereum contract, with a callable method for each of the contract functions exposed via the contract ABI.
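
The general shape of that pattern can be sketched independently of any particular library (this toy proxy is illustrative only and is not ethereum-contract's actual API; encode_call and client.call are hypothetical helpers):

# A toy sketch of the contract-object pattern: one callable attribute
# per function in the contract ABI. Illustrative only.
class ContractProxy:
    def __init__(self, client, address, abi):
        self.client = client    # a JSON-RPC client like those above
        self.address = address
        self.functions = {entry["name"]: entry
                          for entry in abi if entry.get("type") == "function"}

    def __getattr__(self, name):
        if name not in self.functions:
            raise AttributeError(name)
        abi_entry = self.functions[name]
        def method(*args):
            # A real implementation would ABI-encode the arguments, send
            # an eth_call or eth_sendTransaction, and decode the result.
            data = encode_call(abi_entry, args)          # hypothetical encoder
            return self.client.call(self.address, data)  # hypothetical client method
        return method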

Testing

Lots of people have used the ethereum.tester module that is included within pyethereum to write tests. This module exposes a Python-based EVM which works great for testing EVM interactions within your Python code.
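
As a sketch of what such a test can look like, assuming the ethereum.tester API of this era (state() for a fresh in-memory chain and abi_contract for compile-and-deploy, with solc installed):

from ethereum import tester

source = """
contract Math {
    function add(uint a, uint b) returns (uint) {
        return a + b;
    }
}
"""

state = tester.state()  # a fresh in-memory EVM chain
math = state.abi_contract(source, language='solidity')  # compiles and deploys
assert math.add(3, 4) == 7  # each call runs as a real EVM transaction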

For a slightly higher level tool, you can use the ethereum-tester-client (https://pypi.python.org/pypi/ethereum-tester-client). This library exposes a drop-in replacement for either the RPC or IPC based clients which interacts directly with the ethereum.tester EVM. This client can also be used with the ethereum-contract library to test your contract code.

Populus ties it all together

All of these tools serve as a foundation for populus. Populus is a python based framework focused on contract development and testing. Populus's command line interface provides tools for compiling, testing, and deploying your contracts.

Pull Requests welcome

All of these tools are open source and available for use today. Please feel free to reach out to me directly, ideally in a Gitter channel or via a github issue, if you have any problems with any of these tools. And as with any of my projects, pull requests are welcome. Please ensure you include the obligatory cute animal picture with any pull requests.

Three new python libraries

I spent today extracting functionality from populus into three new Python libraries.  All of these have been extremely useful in my contract development and I hope they can help others now that they can be used independently of populus.

I would be remiss not to mention the pyethereum library by Vitalik as well as the eth-testrpc library from ConsenSys.  The ethereum-abi-utils library borrows heavily from the internals of pyethereum and ethereum-tester-client similarly borrows from the internals of eth-testrpc.  Thanks to the authors of these libraries for letting me stand on their shoulders.

Ethereum Contract

Provides an interface for Ethereum contracts as a Python object.

Ethereum ABI Utils

Low level tools for ABI encoding and decoding.

Ethereum Tester Client

This is a drop-in replacement for the Python RPC or IPC client which interacts with the ethereum.tester module.

v0.7.0 Deployed

Version 7 of the Ethereum Alarm Clock service has just finished being deployed. 

  • Full source code for this release can be found on github
  • The latest documentation is available here.
  • A new canary contract has been deployed to 0x24d76e09a1b82bfb7fdefa3fb0df1bab01e5b824
  • Full instructions on how to verify the deployed bytecode available here.
  • Beta release of the scheduling client with compatibility with this new version of the service available on pypi.

This version introduces the following new features.

  • Protection against stack depth attacks.
  • You can now specify a required gas amount for the executing transaction.
  • You can now send ether with a scheduled call.
  • Major reduction in the default price of scheduling.
  • You can now schedule calls as soon as 10 blocks in the future.

This version also includes a change to how the default price of a scheduled call is determined based on market information.  I'll go into detail in a later post about how this mechanism works.

Version 6 should now be considered deprecated.  If this is problematic for you, please reach out to me.

v0.7.0 Deployment Progress

The next version of Alarm is being deployed now.  As part of this deploy I will be taking all of the scheduling clients pointed at the v6 service offline to prioritize v7.  I've verified that the only calls scheduled are mine so I don't expect this to affect anyone.  If it does, please reach out to me.

This also means that the current reliability canary is likely to die within the next two hours.  This canary contract represents 20 days of call scheduling reliability.

Stay tuned for info on what's new with version 7.

v0.7.0 in testing on the testnet

The next iteration of the Alarm service has been deployed to the testnet @ 0x26416b12610d26fd31d227456e9009270574038f.  Main net deploy within the week if I don't find any major issues.  This is a pretty major release so I want to be sure that everything is running smoothly.

  • Code for this release can be found in the master branch on github. https://github.com/pipermerriam/ethereum-alarm-clock/
  • The scheduler client for this version is available in the 7.0.0b4 beta release of the ethereum-alarm-clock-client on pypi.
  • Some basic example contracts can be found
  • Documentation for this release is available here: http://docs.ethereum-alarm-clock.com/en/latest/

Features included in this release are:

  • Stack depth attack protection
  • Ability to specify the required amount of gas for execution.
  • Ability to send ether as part of execution.
  • Experimental dynamic pricing model for default Payment
  • 26 new function signatures for call scheduling which should fit most any use case.

Feel free to reach out to me if you have any questions.

A Dangerous Design Pattern

Update 2016-01-25: This issue has been addressed within the Oraclize.it service but this post serves as a write-up on a design pattern that should be avoided by any service.

The abstract solidity contract that Oraclize currently recommends using to integrate with the service exposes contracts to being drained of their entire account balance.  This attack is possible because of two key components within the usingOraclize contract code.

Source Code Here: https://github.com/oraclize/ethereum-api/blob/3a598bc9b119504a12b929116f95296640388b07/oraclizeAPI.sol

First, the code uses a resolver contract.  Each time a call is made to the Oraclize.it service, the contract first queries the resolver contract to find the current address of the Oraclize connector contract.

Second, each query to the Oraclize.it service calls the getPrice function on the Oraclize.it service, and uses the return value of this function as the ether value that is sent to the Oraclize service to pay for future gas costs.

This sets up a situation where your contract is allowing another contract to specify the ether value that it will send with a transaction.  Any contract which has used the usingOraclize contract to integrate with the service, or re-implemented this same logic within their own contract, has placed the ether balance of that contract in the hands of the operators of the Oraclize.it service.

I'd like to be clear about something.

  • I don't believe the Oraclize.it operators have any malicious intent.
  • I love what Oraclize is doing.

I do however think that it is poor form to encourage a pattern that exposes users to this level of risk. 

When thinking about security, it's important to think about the specifics of what you are protecting and who you are protecting it against.  In this case, the target is the private key which protects the Ethereum address 0x0047a8033cc6d6ca2ed5044674fd421f44884de8.  The thing this key protects is the total account balance of every contract which has implemented this pattern.  I don't have figures on hand but it's reasonable to expect this amount could be quite large given a bit of time and some success on Oraclize.it's part in getting people to integrate with the service.

If the private key is on Thomas Bertani's personal computer then this paints a large target on his back.  History has shown that a motivated attacker can penetrate most any system given some time and I expect this is no different.  The immutability of contract code exacerbates the situation.  The window of time for this attack is effectively infinite.  At ANY point in the future, if this key is compromised, so are your funds.

The usingOraclize contract needs to be changed to add real protection to the contracts that use it.  Here are a few ideas.

  1. Add bounds checking to the return value of the getPrice function to ensure that it is within reasonable limits.
  2. Change the resolver contract to require multi-sig to change the address.
  3. Require users of the usingOraclize contract to implement their own logic for setting the value sent with each query.  Provide safe examples on the API docs.

This has been a difficult issue to write about because I've been torn between not wanting to cause bad press for a service that I really like and feeling obligated to report what I see as a significant security flaw in code that people are likely to copy and paste into their contracts.

Update 2016-01-25: I reached out to Thomas Bertani at Oraclize.it on Sunday.  I wanted to be sure that he both had a chance to address this issue prior to publication as well as to ensure that there was not an error in my assessment.  He was very open to discussing the issue and I'm happy to see he's taken steps to address it.  The usingOraclize contract now has some basic upper bounds checking on the return value of the getPrice query.  He's also informed me that the resolver contract will be transferred to a multi-sig based ownership model.

The Alarm service is now available on the Testnet

You can thank avsa for pestering me to make the Alarm service available on the testnet. You'll find an identical version of the service deployed on the testnet @ 0xb8da699d7fb01289d4ef718a55c3174971092bef.

I have a scheduling server hooked up to this instance of the alarm service, so if you schedule a call there you can expect it to get executed just the same as on the main net.  Reach out to me on gitter in the ethereum-alarm-clock channel if you have any questions or would like help integrating with the service.

I've also deployed the Canary contract to the testnet @ 0x6904acdd438acc322433f68fc64b9e3d5571f40c.  As of this afternoon it was alive and well.  I'll try to carve out time to get testnet canary info up on the main Canary site as well.

Also, if you haven't seen, the latest Canary for the mainnet Alarm service is going to hit 100 heartbeats in about 6 hours (1 heartbeat every 2 hours/480 blocks).  This canary has been alive for a little over 8 days marking a milestone in demonstrating reliability.

Meanwhile I'm working hard on the next iteration which will include improvements to the scheduling API, making scheduling calls easier and more intuitive.  Time based scheduling is right around the corner, and I'm researching ways to implement triggering of calls based on events emitted by other contracts.  Stay tuned.

The story of three brave canaries.

It's been almost a week and so far I'm really happy with the Reliability Canary system.  Thus far two canaries have died.  Both deaths trace back to a known issue with the ethereum-rpc-client that causes it to overload the geth JSON-RPC server, which in turn causes the scheduling client to crash.

I've been putting some extra work into the rpc client and the newly released ethereum-ipc-client to improve their reliability.  This has primarily been focused on reducing the number of RPC calls that are made, adding some caching, and re-architecting the clients so that they don't overload the RPC server when a high number of requests are being made.

These changes have been incorporated into the ethereum-alarm-clock-client in the 0.7.2-beta1 release.  I'll be monitoring the latest canary contract as well as the scheduler process to see if these changes result in the reliability increase that I'm hoping to see.

Update 2016/01/05: The last few days have been brutal and I've got the dead canaries to prove it.  This set of canary contracts has done an awesome job of finding the weakest links in the Alarm service and I think it's worthwhile to go over some of the things that I found.

Canary #1 died due to two separate bugs.  The first was that the alarm client wasn't stable enough to run for extended periods of time and wasn't able to recover from certain crash conditions.  This typically occurred a few hours after launch, so I had set up my scheduling server to restart the process every 10 minutes.  At the time, the client only watched for calls scheduled for future blocks.  This meant that if the client was restarted at the same time that the target block was mined, the call would end up getting dropped.

Canary #2 died in an attempt to fix the client stability by switching to interacting with geth over a socket.  I thought that this would be more reliable than making HTTP requests to the JSON-RPC server.  When I deployed it for testing, everything seemed fine for a while, but a bit more than a day in, the IPC client crashed.  As part of the development of the IPC client, I implemented a system to allow the client to be interacted with from asynchronous code, but to have the client only make requests synchronously.  After this crash I realized I might be able to fix the RPC client's reliability issues with the same approach, to avoid overloading the RPC server.

Canary #3 and #4 happened because my scheduling server dropped all of its geth peers twice.

Canary #5 happened because, now that the RPC client no longer got restarted every 10 minutes, a new bug was exposed: I was re-adding the same handler to a logger, and since one of those handlers was writing to a log file, the server eventually ran out of file descriptors.

It's worth noting that 100% of these failures were due to code that interacts with the alarm service and not the service itself.

At the time of writing this, Canary #6 is alive and the latest deployed version of the client has been running since this morning with no apparent issues.  I've been a bit embarrassed as one canary after another died, but each of those deaths identified the weak points, and more importantly it did so in a very public way.  Reminds me of the build server lava lamp.  The positives that have come out of this are pretty cool as well.

As I found bugs in the client that only appeared in production in unexpected ways, I realized I needed a way to test the new client without switching off the old client.  This meant provisioning a new server, a process I have yet to automate, so I'd been putting it off.  This forced the issue and I made sure to take detailed notes provisioning the new server so that I've got a starting point for automating the process.

Since I was running two schedulers, I didn't want them to be competing on calls, and thus, I needed to implement the call claiming logic in the alarm client.  The second scheduling server is also doing a better job staying connected to its peers.

Fingers crossed that the canary carnage is at an end.

Update 2016/01/07: Canary #6 died because I forgot to feed it (it ran out of ether).  Onto Canary #7.

Update 2016/01/16: Canary #7 hasn't missed a beat for 101 heartbeats (each heartbeat is 2 hours/480 blocks)! 

Say hello to the reliability canary

One of the claims that I've made about the Ethereum Alarm Clock is that it should be capable of being extremely reliable.  Today, I am proud to introduce everyone to the Reliability Canary, an attempt at measuring the reliability of the Alarm service.

The canary is a contract that continually reschedules a call to itself approximately every 2 hours.  Each time this happens a counter is incremented.  If the scheduled call is not executed, the canary dies.

Source code available here

I'm still experimenting a bit with how I want this contract to work.  Once I'm happy with it and it's been running fine for a few days with no problems I'll plan on funding the contract with enough ether to last for a few weeks.

I'm hoping that I don't have to kill too many canaries.

Update: 2015-12-27

Our first canary has been euthanized in favor of a newer, healthier canary which was just deployed.  The new canary contract takes less gas per heartbeat and fixes a bug related to how it sets its initial timestamp.

Running an execution scheduler

If you are interested in participating in the execution side of the Alarm service there is now documentation to get you started.  To help you test that your setup is working as expected, there should be scheduled calls somewhere around every 1-2 hours for a few weeks while I help people get their servers up and running and troubleshoot any issues they encounter.

There are also two open issues (#1 & #2) on github that should be addressed sooner rather than later.  I'll get to them as quickly as I can, but if you'd like to dive in and give either a shot, I'll be more than happy to help point you in the right direction.

I'm excited to be entering this next phase of the service.  I'm hoping that the next few weeks will demonstrate the reliability that I believe the service is capable of.  Time will tell.

v0.6.0 Deployed

I'm proud to announce that the 0.6.0 release of Alarm has been deployed.

This release is somewhat special in that it is the first release that facilitates long-term support for scheduled calls.  In all previous versions execution of a scheduled call involved sending a transaction to the Scheduler Contract which then executed the scheduled call.  Now, the call contracts are 100% independent from the scheduler contract which allows them to be imported into the tracking index of newly released versions of the scheduler.  This migration is fully trustless and a huge step towards a service that can be extremely long lived and provide reliable execution for contracts years into the future.

The other important aspect of this release is the alpha release of the Ethereum Alarm Clock Client, a command line utility for monitoring the Alarm service and executing scheduled calls.

To install the client:

$ pip install ethereum-alarm-clock-client

Running the client requires an unlocked ethereum node operating with the JSON-RPC server enabled.  To run the client:

$ eth_alarm scheduler
BLOCKSAGE: INFO: 2015-12-23 15:31:26,920 > Starting block sage
BLOCKSAGE: INFO: 2015-12-23 15:31:38,143 > Heartbeat: block #328 : block_time: 1.90237202068
BLOCKSAGE: INFO: 2015-12-23 15:31:43,623 > Heartbeat: block #335 : block_time: 1.75782920308
SCHEDULER: INFO: 2015-12-23 15:31:56,415 Tracking call: 0xa4a1b0d99e5271dd236a7f2abe30f81bba67dd90
CALL-0XA4A1B0D99E5271DD236A7F2ABE30F81BBA67DD90: INFO: 2015-12-23 15:31:56,415 Sleeping until 377
BLOCKSAGE: INFO: 2015-12-23 15:31:58,326 > Heartbeat: block #340 : block_time: 1.89721174014
BLOCKSAGE: INFO: 2015-12-23 15:32:06,473 > Heartbeat: block #346 : block_time: 2.07706735856
BLOCKSAGE: INFO: 2015-12-23 15:32:12,427 > Heartbeat: block #352 : block_time: 1.78518210439
BLOCKSAGE: INFO: 2015-12-23 15:32:24,904 > Heartbeat: block #357 : block_time: 1.67715797869
BLOCKSAGE: INFO: 2015-12-23 15:32:32,134 > Heartbeat: block #363 : block_time: 2.02664816647
BLOCKSAGE: INFO: 2015-12-23 15:32:41,400 > Heartbeat: block #368 : block_time: 1.70622547582
BLOCKSAGE: INFO: 2015-12-23 15:32:48,291 > Heartbeat: block #373 : block_time: 1.59583837187
BLOCKSAGE: INFO: 2015-12-23 15:32:53,134 > Heartbeat: block #378 : block_time: 1.51536617309
CALL-0XA4A1B0D99E5271DD236A7F2ABE30F81BBA67DD90: INFO: 2015-12-23 15:32:55,419 Entering call loop
CALL-0XA4A1B0D99E5271DD236A7F2ABE30F81BBA67DD90: INFO: 2015-12-23 15:32:55,452 Attempting to execute call
CALL-0XA4A1B0D99E5271DD236A7F2ABE30F81BBA67DD90: INFO: 2015-12-23 15:32:59,473 Transaction accepted.

The above shows the terminal output you can expect to see when the script finds an upcoming call to be executed.

Happy Scheduling

Christmas?

Getting a lot closer to the 0.6.0 release.  Test suite is all green and I'm about halfway through updating the documentation.  There are a few things that I'm really excited about that I wanted to go ahead and get written up.

Call Portability

Starting with this release, there will be a trustless mechanism for migrating scheduled calls when new versions of the service are released.  This eliminates a huge long term maintainability problem that has been outstanding since day one.

Easier Call Execution

The Caller Pool mechanism from previous releases was a clunky solution to a difficult problem.  This release eliminates the Caller Pool, making it way easier to start executing other people's scheduled calls.

In addition to this, I've started separating out the example implementation of a script that will monitor for and execute calls (and subsequently make $$).

Overall Complexity Reduction

A brief audit today shows that this release is approximately +150 / -600 source lines of code.   This is a huge complexity reduction for the codebase and something I'm quite proud of.

Timeline?

If you've been keeping track, you may have noticed that I've not done a stellar job delivering on my projected release dates for things.  I'm hoping to get this out the door this week, but the reality is that it'll take as long as it takes.

Running your own scheduler

I am working on the documentation for running your own call scheduler.  I've been busier than I expected since getting back from the conference so it's been hard to make happen.  I'm hoping to get it done over the next few weeks.

In other news, I'm making progress on solving the problems that will enable removal of the Caller Pool system, and that will make scheduled calls truly portable, allowing them to be migrated in a trustless manner into newer versions of the service.  I'm still working out some of the math and game theory so stay tuned for a write-up on how it's being solved.

The path towards an open market.

As the Alarm service has evolved, a few high level goals have formed relating to the desired economics of the system. I'd like to spend some time sharing these, because they are important to understanding the motivations behind the technical implementation.

  1. The service should be an open market.
  2. The cost of scheduled execution should be cheap (Ideally, negligible so that call scheduling can be pervasive).
  3. The execution of calls must be profitable.

In order to really understand these, we need to look at the fundamental constraints imposed by the system.

  • The cost of the scheduling transaction.
  • The cost of the executing transaction.
  • The cost of rejecting an execution transaction (like in a situation where the function has already been called).
  • Those executing calls should be motivated to use gas prices that are reasonable.

Let's consider a system in which 100 people are competing for call execution. For the sake of simplicity we will assume they are all equally likely to get their transaction accepted first, and thus, any single one of them will be first 1 in 100 tries. This means that over the course of 100 calls, they will pay the rejected transaction cost 99 times, and get paid their payment plus gas reimbursement once. In this system, the payment amount would need to be more than the cost of the 99 rejected transactions for this to be profitable.
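
A quick back-of-the-envelope calculation makes this concrete (the cost figure is an illustrative assumption, not a measured value):

# Break-even payment for competitive execution with N equal competitors.
num_competitors = 100
reject_cost = 10000  # illustrative cost of one rejected execution attempt
rejections_per_win = num_competitors - 1

# The payment must cover all of the rejected attempts for execution to
# be profitable, so it scales linearly with the number of competitors.
break_even_payment = rejections_per_win * reject_cost
print(break_even_payment)  # 990000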

Also, in any system where there is transaction competition, call executors would be motivated to use ever increasing gas prices to incentivize miners to include their transaction over other transactions. This would further drive up the price of call scheduling.

This rules out any competition based system, meaning that at the time of call execution, an executor must know with 100% certainty that they will be fully reimbursed for their transaction costs. The ideal version of this is that for each call, the system guarantees that the number of transactions that must be reimbursed is as close to one as possible.

The caller pool system has this property at the cost of complexity for both the call executors and the Alarm service itself. An additional cost is that the caller pool is centralized in the Alarm contract, meaning that each call contract is forever tethered to the Alarm service which created it. Ideally, call contracts stand on their own without reliance on another service.

Over the last week, thanks to many discussions with other developers at Devcon, I've begun to work out what looks like a potential alternative approach to this problem. The idea revolves around two changes targeted at improving the open market aspect of the service.

  • Schedulers would specify a maximum amount P they are willing to pay for execution (above the gas costs).
  • Executors would bid on each call, with an amount B, less than P, that they are willing to perform the execution for.

It turns out that this approach suffers from the same problem as competitive transactions. If a call receives N bids, then the cost of those N bidding transactions must be covered, which in theory drives the cost of scheduling up at a rate linear to the number of bidders. In practice there would likely be fewer bidders, since as the bidding price dropped lower and lower, fewer people would be motivated to bid.

Let's take a moment to clarify the problem statement.

  • For each call contract, the mechanism that determines which addresses are allowed to execute the contract must involve as few transactions per call as possible.
  • For a call contract to be fully independent of the Alarm service, it cannot rely on any central database. Any reliance on a central database will make migrating a call contract onto a newer version of the Alarm service harder.

I've started seeing a potential solution for this problem by modeling it as a queue with the following properties.

  • Dynamically adjusts size based on demand.
  • Some reasonable guarantees about equality in terms of access to joining the queue.

The economics of this work out to each call having two transactions worth of overhead. One transaction to enter the queue, and one transaction to execute the call.

To keep call contracts portable, ideally the mechanisms that govern the queue length and joining mechanism can be built such that they are independent of any central state.

This portion of Alarm's evolution is still being heavily worked out, but each problem seems to be solvable. Stay tuned, and feel free to send me any ideas you have on the subject.

Musings on future design patterns for Alarm

This has been an inspirational week.  Devcon1 has been one of my first opportunities to have in person conversations about Ethereum and Alarm with people who are familiar with the subject matter.  It isn't surprising that I've had a few great ideas come out of those conversations, and I'd like to share them with you.

Separation of Data

@peterborah's talk on Contract-Oriented Programming got me thinking about data portability as Alarm matures.  One of the unsolved problems thus far with Alarm is managing support for older deprecated versions of the API.  Those contracts will live on forever and in theory, it would be ideal for any calls that happen to get registered with them to be just as important as calls registered with the newest version of the service.  In reality, it's hard to know how that will play out.

I currently have a command line utility embedded in the main Alarm source code that can be used pretty painlessly to monitor and execute scheduled calls.  Part of my development roadmap is to provide thorough documentation on how to use this package, and as part of that, I can potentially spend a bit of extra time making it support all of the legacy versions of the API.  This however, isn't a very scalable solution given that I expect there to be many more iterations.

The solution outlined in the talk requires a privileged function call to transfer the ownership of the data model contract onto the latest version.  For me to maintain the trustless nature of Alarm, this isn't a possibility.  The idea however got me thinking about the concept of exporting the data.  While it isn't possible for an already deployed Alarm service to learn about a newer version of the API, it is possible for a new version to be deployed with full knowledge of the old versions.

So, in the next version of Alarm, there will be a new export function which takes a call address, validates it is one of its official addresses, and reports it to the new version of the service.  This mechanism helps solve data migration in a trustless way, as the new version of Alarm can trust that any data coming from the old version is the original data.  This does however still leave the problem of call contract ownership.  Each call contract will only accept execution from the main alarm service, which currently only accepts execution from the proper member of the call pool.  This leads me to my second idea.

No More Caller Pools

The caller pool idea originated from a problem that showed up late in the original development.  Call execution must be profitable.  For example, if instead of a caller pool the Alarm service just worked on a first execution wins model, then economic problems start showing up in the incentives for callers to perform the execution.  Suppose there are 3 people competing to execute scheduled calls and the payments involved for doing so.  We can suppose that they would each have an equal probability of executing each call, meaning that 1/3 of the time they would be paid for their service, and 2/3 of the time they would not come in first, and thus would not be reimbursed for their gas costs.  For this to be profitable, the payment must cover the cost of the two failed attempts that accompany each success.  The result of this is that as more call executors compete, the price for scheduling a call increases linearly with the number of people competing.  This essentially drives the cost up for those who want their calls scheduled while not increasing the profitability of call execution, since the extra money is going towards paying for transactions that did not actually produce any value.

The caller pool fixes this by removing the competition in favor of having people commit to executing calls by putting a bond up.  While functional, it isn't an ideal solution as it implements a *lot* of complexity in managing the bonds and pool membership.

Alex Van De Sande had an excellent idea about how to fix this.

When a call is scheduled, instead of specifying a fixed payment amount, the scheduler will specify a maximum payment amount.  Then, starting a few hundred blocks before the scheduled execution time, anyone may bid on the call.  The window for bidding will close somewhere shortly before the scheduled execution time.  At the time of the call, the bidder with the lowest bid will be given the first window to execute the call, the next lowest bid the second window, and so on.  Each bidder will be required to put down a deposit with their bid, and whoever executes the call during their window (assuming it hasn't already been executed) will be rewarded not only with the payment, but also with some portion (or all) of the bonds that were put up by the previous bidders who didn't do their jobs.
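
A rough model of how the bid ordering and bond forfeiture could fit together (my own sketch of the idea, not a specification; all names and numbers are illustrative):

# Lowest bid executes first; the bonds of bidders who miss their window
# go to whoever actually executes. All values are illustrative.
def execution_order(bids):
    # bids: a list of (bidder, bid_amount, bond) tuples
    return sorted(bids, key=lambda bid: bid[1])

def settle(bids, executed_by):
    # Returns the (payment, forfeited_bonds) owed to the executing bidder.
    forfeited = 0
    for bidder, bid_amount, bond in execution_order(bids):
        if bidder == executed_by:
            return bid_amount, forfeited
        forfeited += bond  # this bidder missed their execution window
    raise ValueError("executor did not bid")

bids = [("alice", 90, 50), ("bob", 70, 50), ("carol", 80, 50)]
payment, bonus = settle(bids, "carol")  # bob missed his window
print(payment, bonus)  # 80 50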

This eliminates the caller pool, and removes the need for the executing transaction to come through the main alarm contract since the bidding can all occur on the call contract itself.  Without the need for the call to originate from the main alarm service, this also means that a call contract can easily be exported to newer versions of the alarm service, fixing the data migration problem.

No more Suicide

The final bit of inspiration came from chriseth's talk on solidity contract writing.  There are some unfortunate side effects of suiciding a contract that I had not considered.  One is that it can result in unintended destruction of ether, if anyone sends it a transaction (with ether) after it's been suicided.  Ultimately, suicide was an easy mechanism for returning the remaining funds to the scheduler, but this has made me realize that it is better to just send them back, and then disable the contract entirely.  This way, all of the call information is preserved on the chain, and anyone who accidentally sends a transaction to the contract containing ether won't lose it.

Less fully formed ideas.

I've had a number of other ideas but they are less fully formed, and I am less sure they are good ideas.  Here they are in their infant form.

Tokens?

How can a token based system improve the service?  Tokens are one way to remove the fee system and instead introduce a more public ownership model.  Tokens could be created on each executed call, with a small percentage of the payment placed into a central pool.  This could be thought of as dividends being paid to shareholders.  I'm honestly unsure of the specifics of how such a system should work, or whether it improves the service in any way.

Crowdfund?

I've struggled with two seemingly conflicting goals.  One is to have the service be trustless and something that can be considered public property.  The other is that I wouldn't mind making a living off of the work that I put into it.  Originally, the service had a hard coded fee that was sent to my ethereum address.  My most recent release changes this to be non-compulsory, allowing the fee to be set to whatever value a scheduler chooses.  One possible way to remove this entirely is to crowdfund the project and sell *shares* in the service.  Then, instead of having my address hard coded into the service, the funds could be distributed to the shareholders.

Again, I'm unsure of whether this idea improves the service in any way, or whether it's even something that people would invest in.

Last Thoughts

Ultimately, I want to do things that allow for Alarm to be a pillar that other applications can build on, knowing that it will serve the ecosystem reliably for many many years to come.  Because of this, I won't be doing anything that I'm not 100% sure moves the needle in this direction, and specifically I won't be doing any of the high level ideas (tokens, crowdfunding) without expressed community support.

v0.5.0 Deployed

I've just finished up deployment and testing of the 0.5.0 release of the Ethereum Alarm Service.  This release introduces a number of major API improvements that should make integrating with the service simpler and easier.

In previous versions of Alarm, all of the data was stored in a single central contract.  This release changes the service to deploy a new contract for each scheduled call.  Each of these call contracts manages its own gas money, which has simplified the accounting logic and allowed for automatic reimbursement of unused gas money (previously you would have to withdraw it manually).

This version also introduces major changes in the payments/fees that go to the call executor and myself.  In the previous versions these values were calculated from the gas used, making it unpredictable how much a call would be worth, as well as giving schedulers no control over how much they would like to pay for their call to be executed.  This release opens the Alarm service up as a free market where schedulers can choose what payment and fee amount they want to pay, and in turn, executors can know how much they will get paid for execution.

The authentication API has also been deprecated and removed.  Previously, it was not possible for a contract being called by the Alarm service to query information about the scheduled call, such as who scheduled it.  In this version, a contract can look at msg.sender and look up the call scheduler with the Alarm service, eliminating the need for the Relay contracts and the authorization of schedulers.

This release marks a big step for the service.  Moving the call execution into individual contracts opens up a much simpler path to new types of scheduled calls.  The first that you are likely to see are calls set for specific times (rather than blocks), and then recurring calls.

Devcon1 and other musings

Getting geared up to leave for London this Saturday to attend Devcon next week.  It's been an amazing couple of months since the Frontier network went live.  I'm looking forward to meeting the people behind the screen names that I've been chatting with through various channels.  Please feel free to seek me out during the conference.  The best way to contact me during the conference is likely on twitter via @pipermerriam.

On Tuesday I'll be on two panels.  During the Middleware & On-Chain Services panel I'm hoping to get a chance to talk about some of the more philosophical ideas I have surrounding how these services should be designed.  For the first time in history, we have an opportunity to create truly trustless services. It's going to take the whole community embracing some standards about what that really means and holding each other to them.

I've been scrambling on the latest iteration of the Alarm service as well which I'm hoping to get done by the conference.  The next version is going to introduce some major API changes which simplify a lot of the complexity involved in scheduling a call, as well as managing the up-front gas money required for the call to execute.

Safe travels to everyone who's headed that way.  See you there.

Quick Left and Ethereum Consulting

In the real world I am gainfully employed by a wonderful company named Quick Left in Boulder, Colorado.  Last week I approached our head of engineering about the idea of taking on Ethereum based consulting work after receiving a few recruiting emails for Ethereum based startups.

Being able to spend my normal work day helping expand the Ethereum ecosystem without some of the risk involved in joining a startup is really appealing to me.  A handful of years ago, that sort of risk would have been possible, but having a house and a family and all of the things that go along with those has me appreciating the stability that my job affords me.

I'm happy to say Quick Left was really supportive, if not downright excited about the idea.  If you'd be interested in having me work on your app, feel free to get in touch via pmerriam@quickleft.com and we can see if it would be a good fit. 

It's really cool to work for a company that supports its engineers.

And lastly, you can read my introduction to Ethereum blog post on the Quick Left blog.

One reason to start using Solidity Libraries

There's a design pattern that arose out of my most recent refactoring that is worth sharing.  It involves pushing all of the functionality for a contract into a library and having the contract functions merely delegate out to the library functions.  By doing this, you can significantly reduce the deployment costs for contracts which are going to be deployed many times.

Let's look at how this would work for a modified version of the greeter contract.

contract GreeterA {
        bytes32 greeting;
        uint count;
        address[] greeted;

        function GreeterA(bytes32 _greeting) {
                greeting = _greeting;
        }

        function greet() public returns (bytes32) {
                count += 1;
                greeted.push(msg.sender);
                return greeting;
        }
}

This is a pretty simple contract, but it can be refactored to delegate its functionality out to a library.

library GreeterLib {
        // All state lives in the calling contract's storage; the library
        // only operates on the storage reference that is passed in.
        struct Greeting {
                bytes32 greeting;
                uint count;
                address[] greeted;
        }

        function greet(Greeting storage self, address who) public returns (bytes32) {
                self.count += 1;
                self.greeted.push(who);
                return self.greeting;
        }
}


contract Greeter {
        GreeterLib.Greeting greeting;

        function Greeter(bytes32 _greeting) {
                greeting.greeting = _greeting;
        }

        function greet() public returns (bytes32) {
                // Delegate to the library, forwarding the original caller.
                return GreeterLib.greet(greeting, msg.sender);
        }
}

Benchmarking these two contracts yields the following results.

  • Call gas cost
    • Normal: 81883
    • Library: 82185
  • Deploy gas cost
    • Normal: 87856
    • Library: 80482

The library contract costs an additional 302 gas to call, but it's 7,374 gas cheaper to deploy.  For small contracts which are deployed often and only called a small number of times, the gas savings are significant.  The reason for this difference is that the library version's bytecode contains little more than the CALLCODE operations that delegate to the already-deployed library contract.

It's worth pointing out that the library version of the contract passes msg.sender into the library call.  When you access msg.sender from within a library, the address is that of the contract which called the library function as opposed to the address that called the contract itself.