PoCo #1 — About trust and agent incentives

Who am I?

My name is Hadrien Croubois and I am a Ph.D. student at ENS de Lyon. My research as a Ph.D. student focuses on middleware design for the management of shared Cloud-based scientific-computing platforms, and more particularly on how to optimise them for workflow execution. However, my interests are much broader and include HPC, large-scale physics and biology simulations, image rendering, machine learning and, of course, cryptography and blockchain technologies.

Since September 2017, I have also been a scientific consultant for iExec. I met Gilles at ENS de Lyon, and this was the perfect opportunity for me to experience working in a team designing innovative solutions.

My role as a member of this team is to study existing work from the research community and provide insight into the design of a proof-of-contribution protocol for iExec. This article is by no means a solution to this complex issue. It is rather an overview of our understanding and ideas regarding this issue.

Why does iExec need Proof-of-Contribution?

The iExec platform provides a network where application providers, workers, and users can gather and work together. Trust between these agents is to be achieved through the use of blockchain technology (Nakamoto consensus) and cryptography.

Our infrastructure relies on three types of agents:

  • Application providers: they provide applications, which are seen as services. These applications can be called by users with specific parameters. Application providers are paid for each execution of their application. The applications rely on the iExec smart contract to manage communications between the Ethereum blockchain and the off-chain computing platform.
  • Users: they are the clients of the infrastructure. They pay to obtain results computed by the applications.
  • Workers: they are computing entities that provide computing resources. These resources are used for the off-chain execution of the applications. Workers are paid based on their contribution to the computation of the applications.

The goal of the Proof-of-Contribution protocol is to achieve trust between the different agents, and more particularly between users and workers, so that users can rely on results computed by an external actor whose incentive is, at best, based on income. In particular, we want to achieve protection against Byzantine workers (who could provide bad results to penalize users) and against Byzantine users (who could dispute legitimate work performed by legitimate workers).

First approach: the result contribution validation scheme

Validation of the work performed by the worker can be achieved in two different ways:

  • Majority voting on the (hash of the) result. This mitigates Byzantine workers, but at the price of a computing-power overhead: validating the result of a specific execution requires multiple workers to compute it, multiplying the execution cost by a factor m. In desktop grid or volunteer computing platforms (BOINC), this factor m can range from 3 all the way to 20-50. More replication brings more confidence in the result, but it also means the reward is shared among more workers, reducing each worker's incentive to contribute.
  • Relying on a court system to solve conflicts between users and workers (TrueBit). This solution is, however, complicated both for the users, who have to check every single result, and for the platform, which has to implement complex arbitration mechanisms. While this method does not require the work to be executed many times, the arbitration mechanism might call for heavy instrumentation of the execution in order for the worker to provide elements of proof if their execution is challenged.

A significant contribution was published by Luis Sarmenta (2002. Sabotage-tolerance mechanisms for volunteer computing systems. Future Generation Computer Systems, 18(4), 561–572). The proposed approach is based on majority voting but rather than relying on a fixed m factor, it dynamically “decides” how many contributions are necessary to achieve consensus (within a specific confidence level). The replication level is therefore dynamic and automatically adapted, during execution, by the scheduler. This helps to achieve fast consensus when possible and to solve any conflicts.
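Sarmenta's dynamic replication can be illustrated with a toy model. The function names and the simplified confidence formula below are ours, not Sarmenta's exact credibility equations (which also account for competing answers and spot-checking); the sketch only shows the stopping rule: keep gathering votes until the leading answer reaches the target confidence.

```python
import math

def answer_confidence(credibilities):
    """Toy estimate of confidence in an answer supported by the given
    voters: the chance that not all of them are wrong at once.
    Each credibility is the assumed probability a voter is correct."""
    p_all_wrong = math.prod(1.0 - c for c in credibilities)
    return 1.0 - p_all_wrong

def needs_more_votes(credibilities, epsilon):
    """Dynamic replication: ask for more contributions until the
    leading answer reaches the 1 - epsilon confidence target."""
    return answer_confidence(credibilities) < 1.0 - epsilon
```

For example, two voters with credibility 0.9 yield a confidence of 0.99: enough for ε = 0.05, but a stricter ε = 0.001 would trigger another round of replication.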

Fig 3 from Sarmenta’s paper, describing how workers contribute to different jobs by voting on the result.

This approach relies on worker reputation to limit the potential impact of Byzantine agents and to achieve consensus. Yet it was designed for desktop grid infrastructures, where money is out of the equation. Using the financial incentives of the different actors, we can modify and improve this approach to better fit our context:

  • Each worker's retribution for computing a task can be indexed on their impact on the consensus for this task. In addition, having a good reputation helps to achieve fast consensus with fewer agents (meaning a bigger share for each agent). This gives workers a financial incentive to behave well and build up their reputation.
  • Workers are required to commit a security deposit (stake), which is seized in case of bad behavior. This gives workers an additional financial incentive to behave correctly.

The main drawback of Sarmenta’s article is the assumption that Byzantine workers do not work together and do not coordinate their attacks. While this assumption does not hold in our context, we believe we can still protect the vote by selecting workers randomly among the worker pool. Byzantine workers controlled by a single entity should therefore statistically be dispatched across many different tasks, and should not be able to overtake the vote on any specific task.
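A quick simulation illustrates why random dispatch helps. The function and its parameters are illustrative, not part of the iExec protocol: it estimates how often a colluding minority wins a simple majority on a single task when voters are drawn uniformly at random from the pool.

```python
import random

def collusion_takeover_rate(pool_size, byzantine, votes_per_task,
                            trials=10_000, seed=42):
    """Estimate the probability that randomly sampled voters for one
    task contain a strict majority of colluding (Byzantine) workers."""
    rng = random.Random(seed)
    # True marks a Byzantine worker, False an honest one.
    workers = [True] * byzantine + [False] * (pool_size - byzantine)
    takeovers = 0
    for _ in range(trials):
        voters = rng.sample(workers, votes_per_task)  # random dispatch
        if sum(voters) > votes_per_task // 2:
            takeovers += 1
    return takeovers / trials
```

With 10% of a 1000-worker pool colluding and 5 voters per task, takeovers are rare (well under 2% of tasks in this simulation); the attack only becomes reliable once the colluding entity controls a majority of the whole pool.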

Adapting Sarmenta’s result certification mechanism to off-chain execution

While Sarmenta’s work is interesting, a few modifications are required for it to work in our context. In this section, we discuss preliminary ideas on how we believe this work could be adapted to iExec’s needs. Our idea is to orchestrate the exchanges between the users and the workers as described below.

In addition to the users and workers, we have an additional component: the scheduler. Schedulers manage pools of workers and act as middlemen between the blockchain (listening to the iExec smart contract) and the workers. A scheduler can, in fact, be composed of multiple programs with complementary features, but we will here consider it as a single “virtual” entity.

One should notice that our discussion here does not deal with the scheduling algorithm itself. In a scheduler, the scheduling algorithm handles the logic responsible for the placement of jobs and deals with execution errors. The scheduler is free to use any scheduling algorithm it desires, as long as it can deal with steps 3 and 5 of the following protocol.

  1. Workers register themselves with a scheduler.
  2. Users submit tasks to the scheduler managing the worker pool they chose.
  3. Workers ask the scheduler for work to execute. The scheduler gives them tasks to be executed. Note: if we are coming from step 5, the scheduler should not ask a worker to compute a task it has already contributed to.
  4. The worker computes the result (A) of the task. In order for this result to be validated, the platform has to achieve a consensus on this result. This is achieved through Sarmenta’s voting. In order to contribute to this consensus, the worker commits the result to the scheduler:
     a. Generate and memorize (but do not publish) a random value r (private disposable personal identifier).
     b. Submit a transaction (contribution) with:
        i. hash(A) → used to vote on an answer;
        ii. hash(r) → used as a public disposable personal identifier;
        iii. hash(A+r) → used as proof of knowledge of A;
        iv. a commitment fund (with a minimum value) → an incentive to only commit good results (see later). A higher commitment fund increases the Cr (cf. Sarmenta) and thus increases the potential returns (see later);
        v. a tamper-proof timestamp → used by the worker to prove its contribution and claim its reward.
  5. With each new vote (contribution) by the workers, the scheduler checks whether an answer (hash(A)) achieves the expected likelihood threshold using Sarmenta’s voting:
     a. if we do not have a consensus, the scheduler asks more nodes to compute the same task (dynamic replication) and contribute to the consensus → go back to step 3;
     b. if we have a consensus, continue to step 6.
  6. An answer has been selected. The scheduler can now:
     a. publish the elected hash(A). At this point, no new contribution is possible;
     b. ask the winning workers for A and r. Having a value of r which matches a contribution dating from before the election result is a proof of contribution. At this point, A can be published by any worker. The value of r shows that a worker knew the answer they voted for before the results of the election; that way, they cannot claim a contribution by just submitting a transaction with the hash(A) published by other voters;
     c. check the correctness of each worker’s contribution;
     d. put the deposit fund (stake) of all workers who voted for another answer into the reward kitty;
     e. distribute the reward kitty (user payment + deposit funds from wrong workers) among the winning workers, proportionally to their contribution (the Cr value computed from the reputation and the funds committed to the vote). The scheduler may take a commission for its work;
     f. increase the reputation of winners, decrease (reset) the reputation of losers;
     g. send the now-validated answer to the user.
Equations used by Sarmenta to compute the credibility of a result from the credibility of the voters.
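The commit-and-reveal part of steps 4 and 6 can be sketched as follows. This is a minimal illustration using SHA-256; the actual iExec implementation may use different primitives, and the on-chain transaction and timestamp handling are omitted.

```python
import hashlib
import secrets

def h(data: bytes) -> str:
    """SHA-256 hex digest, standing in for the protocol's hash."""
    return hashlib.sha256(data).hexdigest()

def make_contribution(answer: bytes):
    """Worker side of step 4: commit to an answer A with a private
    random value r, publishing only hashes."""
    r = secrets.token_bytes(32)          # private disposable identifier
    contribution = {
        "vote": h(answer),               # hash(A): vote on an answer
        "id": h(r),                      # hash(r): public identifier
        "proof": h(answer + r),          # hash(A+r): proof of knowing A
    }
    return contribution, r               # r stays secret until reveal

def verify_reveal(contribution, answer: bytes, r: bytes) -> bool:
    """Scheduler side of step 6: check a revealed (A, r) pair against
    the earlier commitment. A worker who only copied a published
    hash(A) cannot produce a matching r."""
    return (contribution["vote"] == h(answer)
            and contribution["id"] == h(r)
            and contribution["proof"] == h(answer + r))
```

Because hash(A+r) binds A and r together before the election, revealing r after the elected hash(A) is published proves the worker knew A in advance.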

Trust level, worker pools, and billing policy

Sarmenta’s voting helps to achieve a given level of confidence using worker reputation and dynamic replication. This confidence level is defined by a value ε, which describes the acceptable error margin. Results should only be returned if a confidence level higher than 1-ε is achieved.

This value is a balance between cost and trust: a lower ε means more confidence in the result, but also requires more reputation/contributions to achieve consensus, and therefore more work to be performed. While this value could be defined by the user for each task, users might not know how to set it, and it might cause billing issues.

We believe this value should be fixed for a worker pool. The billing policy could then be defined per worker pool, depending on the performance of the workers (speed) and the ε value used by this worker pool’s scheduler (level of confidence).

The user would then be free to choose between worker pools. Some worker pools might only contain large nodes running technology like Intel SGX to achieve fast results with low replication. Other worker pools could contain (slower) desktop computers and have their consensus settings adapted to this context.

With consensus managed by the scheduler, and financial opportunities for late voters provided by the security deposits of opposing voters, users should not have to worry about anything. Users pay for a task to be executed on a pool of workers, regardless of the number of workers that end up involved in the consensus.

If the consensus is fast and easy, the user’s payment is enough to retribute the few workers who took part in the vote. If the consensus is hard and requires many contributions, the workers are retributed using the security deposits of the losing voters. This gives workers a financial incentive to contribute to a consensus with many voters, without requiring the user to pay more.
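The reward distribution of step 6 could look like the sketch below. The 5% scheduler commission and the flat proportional split are illustrative assumptions, not values fixed by the protocol.

```python
def distribute_rewards(user_payment, losing_stakes, winners,
                       scheduler_fee=0.05):
    """Split the reward kitty (user payment + seized stakes) among
    winning workers, proportionally to their contribution weight.
    `winners` maps worker id -> contribution weight (Cr-derived).
    `scheduler_fee` is an illustrative commission rate."""
    kitty = user_payment + sum(losing_stakes)
    commission = kitty * scheduler_fee
    payable = kitty - commission
    total_weight = sum(winners.values())
    payouts = {worker: payable * weight / total_weight
               for worker, weight in winners.items()}
    return payouts, commission
```

Note that the user's payment is fixed up front: a contested vote only grows the kitty through the seized stakes of losing voters, exactly as described above.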

In the current version of this work, the protocol is such that the user has no part in the consensus. Payments are made when submitting the task, and no stake is required from the user. Results are public and guaranteed by the consensus; users therefore cannot dispute a result.

Assumptions and agent incentives

We believe the protocol described previously to be secure, provided a few assumptions are met:

  1. The first strong assumption is the ability of workers to publish their transactions (contributions) in a public manner. The medium used to publish those contributions has to provide a secure way for anyone to verify that a contribution was made prior to the election results. This can simply be achieved using current blockchain technology such as Ethereum smart contracts. Still, that should not prevent us from considering other approaches like DHTs (distributed hash tables).
  2. The second assumption is that the voting algorithm will, in fact, give good results. This assumption is equivalent to saying that 51% of the reputation (of a worker pool) is not controlled by a single malicious user. We believe this is not a flaw of the protocol, for two reasons:
     a. all voting-based systems, including the Nakamoto protocol, are subject to such attacks; this flaw is not specific to the design of our protocol;
     b. there are strong (financial) penalties for bad actions on the platform, and spot-checking can be enforced to give more power to the scheduler and help it deal with bad actors. It is a matter of balance between the scheduler and the workers to enable spot-checking or not. We can imagine multiple worker pools, run by different independent schedulers with specific policies. Ultimately, those pools could compete to attract users (with elements such as the achieved quality of results and pricing).

Finally, we believe that both schedulers and workers will be inclined to work correctly in order to provide a good service to the users and benefit from the iExec ecosystem. Having 51% of the reputation controlled by actors wanting to do things right and benefit from it should not be an issue.

Incentives for the different agents are as follows:

  1. Users: they are requesting work to be done, and in a healthy system money would only come from them. Users’ incentive to use the platform is to obtain good results for a low price. This will lead them to create competition between worker pools. Their ability to choose or boycott worker pools creates an incentive for workers and schedulers to work together in order to achieve the best service possible and attract users.
  2. Workers: their incentive is to gain as much money as possible for their work. To maximize their gain, they should maximize their contribution. Contribution can be obtained by having a good history (reputation) and/or by committing more funds when submitting a contribution. Giving bad results would make them lose both funds and reputation, which they should avoid at all costs.
     a. New actors, with no history, start with a low reputation, meaning they weigh less in the vote. Their chance of overtaking a vote against trusted workers is small, and it would be a waste of funds for an attacker.
     b. An old actor with a good history can win a lot by using their reputation to perform computations. As they are trusted, fewer contributions are needed to settle a vote, and the reward kitty is therefore shared among fewer agents. On the other hand, by submitting bad results they risk losing all their reputation (and the money they committed with the contribution). Reputation does not guarantee winning votes, and spot-checking can help to detect bad contributors with a high reputation.
  3. Schedulers: their incentive is to gain money by helping coordinate the platform. They make money through:
     a. commissions on all transactions;
     b. unclaimed rewards: if a worker doesn’t claim their reward after a contribution, the corresponding funds are kept by the scheduler.

In order to make money, the scheduler needs users to submit jobs and workers to register in its worker pool. This gives it an incentive to manage the worker pool correctly and to grow it.

Public schedulers for a fully decentralized platform

One of the key elements that could ultimately help a scheduler get bigger and attract more workers and users is to be open about its decisions. We believe that a scheduler could rely on a blockchain mechanism to orchestrate the protocol described above.

In fact, this protocol is designed so that every message can, and should, be public. Security is achieved using cryptography. In particular, the use of a blockchain solves the issue of proving a contribution’s existence (presence on the blockchain) and validity (precedence over the vote results).

The main issue that still has to be solved is worker designation. At step 3, the scheduler submits the task to specific workers. This is important for two reasons:

  • We don’t want workers to race. Racing would favor fast nodes, and one could attack the voting system by coordinating many fast nodes to take over a vote before other nodes can contribute.
  • We don’t want malicious nodes to take over some votes. By randomly assigning workers to jobs, we distribute malicious nodes amongst many votes, where they will not be able to take over and where their best play is to provide good results and benefit from the platform working correctly.

Such a mechanism requires a source of randomness that any observer of the blockchain can agree on. This problem is beyond the scope of this post. Having such a source of entropy would help the scheduler designate workers using a random yet verifiable algorithm.

The data required for verification would be public. The only change required to the protocol would be that a valid contribution from a worker would require a proof that the worker was designated by a scheduler.
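Given such a public entropy source, one possible verifiable designation rule is to rank workers by a hash of the seed, the task, and their identity. This is only a sketch under that assumption, not the actual iExec mechanism: anyone holding the seed and the pool membership list can re-run the selection and check that a contributing worker really was designated.

```python
import hashlib

def designate_workers(seed: bytes, task_id: str, pool: list, m: int):
    """Deterministically pick m workers for a task from a public
    random seed. The ranking is a hash of (seed, task, worker id),
    so the choice is unpredictable before the seed is revealed but
    verifiable by any observer afterwards."""
    scored = sorted(
        pool,
        key=lambda worker: hashlib.sha256(
            seed + task_id.encode() + worker.encode()).hexdigest())
    return scored[:m]
```

Because the selection is a pure function of public data, a worker's "proof of designation" is simply the seed and the pool snapshot used for the draw.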