In its early days, the Internet was a symmetric, decentralized peer-to-peer network of computers. As time passed, it became more asymmetric, concentrated in a few data centers with billions of PCs and laptops on the edges. The Internet began as a peer-to-peer decentralized network because such networks offer scalability, high fault tolerance, and resilience to censorship. However, security is a major drawback of these networks, as malicious nodes inevitably join. Such nodes can flood the network with invalid packets, preventing legitimate packets from being delivered: a flood attack.
Another common attack is the Man-in-the-Middle (MitM) attack, in which an attacker places himself between two peer nodes in the network. Such an attack can remain undetected as long as the attacker stays passive, allowing him to eavesdrop on communications between the two nodes. The attacker can then assume the identity of either peer node, compromise one or both of them, and try to infiltrate the network.
What are Sybil Attacks and Sockpuppets?
Sybil attacks are another security vulnerability specific to peer-to-peer decentralized networks, which are open and therefore allow anonymous entrants. The attack is named after the subject of the book Sybil, a case study of a woman diagnosed with Dissociative Identity Disorder. The core of a Sybil attack consists of creating a large number of pseudonymous identities. Once those identities are accepted as peers, they try to gain control of the network and subvert it from within. The network's resilience depends on how easy it is to create an identity and be accepted as a peer. Since no firewall is 100% failproof against these attacks, the best defense is to make them as impractical as possible.
Sockpuppet is a term for the use of multiple online identities with the sole objective of deceiving online communities.
A sockpuppet is an online identity used for purposes of deception. The term, a reference to the manipulation of a simple hand puppet made from a sock, originally referred to a false identity assumed by a member of an Internet community who spoke to, or about, themselves while pretending to be another person. A significant difference between the use of a pseudonym and the creation of a sockpuppet is that the sockpuppet poses as an independent third-party unaffiliated with the puppeteer. [source]
What is a 51% attack?
A “51% attack” means a bad guy getting as much computing power as the entire rest of the Bitcoin network combined, plus a little bit more. [source]
In his white paper, Satoshi proposed proof-of-work. The main purpose of this algorithm is to make 51% attacks impractical, though it does not eliminate them entirely. The algorithm makes an attack harder because matching the combined hashing power of the rest of the Bitcoin network requires enormous resources. I would therefore like to discuss the possibility of mitigating these risks with proof-of-reputation.
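To make the cost asymmetry concrete, here is a minimal sketch of hash-based proof-of-work (a toy, not Bitcoin's actual block format): finding a valid nonce takes brute-force search, while checking one takes a single hash.

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Search for a nonce whose SHA-256 hash starts with `difficulty` hex zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Each extra leading zero multiplies the expected work by 16; this
# exponential cost is what makes amassing 51% of the hash rate so expensive.
nonce = mine("block: alice pays bob 1 BTC", 4)
```

An honest verifier re-hashes once to confirm the nonce; an attacker rewriting history must redo this search for every block, faster than the rest of the network combined.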
Why does centralized proof-of-work increase the risk of a 51% attack?
Let us imagine a scenario where proof-of-work is centralized in a few data centers. Whoever controls those data centers can intentionally manipulate the proof-of-work of the decentralized network, and it also becomes feasible for hackers to gain total control of it. This would play out exactly like a centralized Bitcoin exchange getting hacked. Thus, centralizing proof-of-work magnifies the risk of attack rather than mitigating it.
Why is delegated proof-of-stake equivalent to centralized proof-of-work?
Delegated proof-of-stake magnifies the risk of a 51% attack in much the same way centralized proof-of-work does: it is easier to corrupt 100 delegates than to corrupt 51 percent of the stakeholders.
A bank is an example of a hybrid of delegated proof-of-stake and a fractional reserve system. When a user deposits 100 silver coins into a bank, the user delegates his stake of silver to the bank. The bank then issues 10,000 notes backed by the user's 100 silver coins.
The issue with banks is that, like any delegated proof-of-stake system, they depend on trust. If that trust is violated, the damage is multiplied by the fractional reserve system. Delegated proof-of-stake therefore cannot be classified as a decentralized system, because one has to trust a third party with one's delegated stake. In the long run, more users tend to delegate their stakes because of brand loyalty, user-friendliness, and so on. This leads to more centralization, violation of trust, and dilution and corruption of the whole stake.
Proof-of-work is based on control of processing power, while proof-of-stake is based on the percentage of wealth; both are relatively easy to corrupt. The motive behind proof-of-reputation, on the other hand, is grounded in ethics and morality, which makes it far more resilient to corruption.
Let us examine proof-of-reputation in depth, along with its implications. Assume there are 10 anonymous generals who do not trust each other but are willing to undertake an invasion by providing 1,000 soldiers each, each settling for one tenth of the spoils in return. It is highly probable that a general either controls 2 to 3 sockpuppets, conspires with another 5 generals, or does some combination of the two.
Now let us bring proof-of-reputation into the equation. Say each general has a proof-of-reputation score based on how many of his 1,000 soldiers like him. It is very difficult for one general to earn a good reputation for all 3 of his sockpuppets, and even if the sockpuppets end up with similar reputations, that defeats their purpose. Likewise, it is very difficult to conspire with 5 other generals who all have good reputations, because each of them would have to risk his reputation.
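The sockpuppet problem in this thought experiment can be sketched numerically. Assume, as a simplification, that a general's reputation is the count of distinct soldiers endorsing him; the scoring rule and names below are illustrative, not a real protocol.

```python
# Sketch: reputation = number of distinct soldiers endorsing an identity.
# Sockpuppets controlled by one general must draw endorsements from the
# same 1,000 soldiers, so splitting them dilutes each identity's score
# instead of multiplying influence.
def reputation(endorsements: set) -> int:
    return len(endorsements)

soldiers = {f"soldier-{i}" for i in range(1000)}
honest_general = reputation(soldiers)  # all 1,000 endorse one identity

# One general splits his soldiers across three sockpuppets:
puppet_scores = [reputation(set(list(soldiers)[i::3])) for i in range(3)]
assert max(puppet_scores) < honest_general
```

The total endorsements across all sockpuppets can never exceed the single general's soldier pool, which is why faking several well-reputed identities is self-defeating.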
In a decentralized peer-to-peer network, it is next to impossible to simultaneously corrupt 51% of the proof-of-work, 51% of the proof-of-stake, and 51% of the proof-of-reputation of the whole network.
Implementation of Proof-of-Reputation
Proof-of-reputation can be implemented as an assurance contract which is explained as follows:
In a binding way, members of a group pledge to contribute to action A if a total contribution level is reached. If the threshold level is met, the action is taken, and the public good is provided; otherwise, the parties are not bound to carry through the action and any monetary contributions are refunded. [source]
The problem with an assurance contract is that it enables free riders: those who do not contribute to the public good but reap its benefits at the cost of the contributors. To eliminate the free-rider problem, Alex Tabarrok proposed the Dominant Assurance Contract in a white paper. A dominant assurance contract keeps the threshold and expiry date of an ordinary assurance contract but adds a refund bonus: if the threshold is not met, the entrepreneur refunds every pledge plus a bonus, which makes contributing a dominant strategy.
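The mechanics can be sketched in a few lines. This is a toy model of Tabarrok's scheme, not any deployed contract; the class and field names are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class DominantAssuranceContract:
    """Toy dominant assurance contract: fund on success, refund + bonus on failure."""
    threshold: int                      # total pledges needed for the action
    refund_bonus: int                   # entrepreneur's bonus per pledger on failure
    pledges: dict = field(default_factory=dict)

    def pledge(self, who: str, amount: int) -> None:
        self.pledges[who] = self.pledges.get(who, 0) + amount

    def settle(self) -> dict:
        total = sum(self.pledges.values())
        if total >= self.threshold:
            # Success: pledges are collected and the public good is funded.
            return {"funded": True, "payouts": {}}
        # Failure: everyone gets their pledge back plus the bonus, so
        # pledging is the dominant strategy either way.
        return {"funded": False,
                "payouts": {w: amt + self.refund_bonus
                            for w, amt in self.pledges.items()}}
```

A pledger either helps fund the good or walks away with more than was pledged, which removes the incentive to free-ride and wait for others to contribute.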
Therefore, proof-of-reputation has to be implemented as a dominant assurance contract to discourage free riders. One method of implementation is based on semi-trusted oracles, which Gavin Andresen explains as follows.
So I’ll start there, and imagine that there are semi-trusted ‘oracles’ that compete to be the most reliable and trustworthy verifiers of contracts. People involved in contracts choose N of them, and then require that contract conditions be validated by one or more of them before the contract pays out. Pick more than one so no single oracle can steal the contract’s funds, but less than N in case some of them go out of business or just aren’t around to validate contracts when it is time for the contract to pay out.
These oracles need an agreed-upon, machine-readable contract language, but that shouldn’t be hard. There are lots of interesting design decisions on what information contract scripts have access to (and lots of not-so-interesting-to-me design decisions on the language itself; is it stack-based, register-based, high-level, low-level bytecode, etc etc etc). [source]
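The payout rule described above is essentially an M-of-N check: the contract pays out only when at least M of the N chosen oracles attest that its conditions were met, with 1 < M < N. A minimal sketch (oracle names and the attestation format are assumptions, not a real oracle API):

```python
# M-of-N oracle sign-off: pay out only if at least `m` of the chosen
# oracles have attested that the contract's conditions were met.
def contract_pays_out(attestations: dict, oracles: list, m: int) -> bool:
    approvals = sum(1 for o in oracles if attestations.get(o, False))
    return approvals >= m

oracles = ["oracle-a", "oracle-b", "oracle-c"]   # N = 3, require M = 2
assert contract_pays_out({"oracle-a": True, "oracle-b": True}, oracles, 2)
assert not contract_pays_out({"oracle-a": True}, oracles, 2)  # one oracle is not enough
```

Requiring M &gt; 1 means no single oracle can release the funds alone, while M &lt; N tolerates oracles that disappear or go out of business.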
Another method of implementation is awarding tokens to miners based on honesty and integrity. Tokens are essentially an implementation of the assurance contract that aligns the motives of miners and end users toward the common good. For example, if mining pool operators dedicate 10-20 percent of their mining rigs for a period of time, they gain an incentive to be honest and earn reputation in the form of tokens, in addition to mining rewards. A miner using a mining pool can pledge 5% of his total Bitcoin mining toward the dominant assurance contract, so that the pool receives a reputation token that can be pegged to the market value of Bitcoin.
Tokens can also be crowdfunded as a pledge by the stakeholders of the decentralized network, ensuring that miners and pool operators have an incentive to be honest and thereby earn reputation. Tokens can be earned or burned depending on whether the coin is inflationary or deflationary; tokens that would otherwise be burned can instead be released and claimed by charities.
The tokens can be issued as 1-to-n, n-to-n, or n-to-1, depending on individual requirements, using the Counterparty or Colored Coins protocol for Bitcoin, or the Dogeparty protocol for Dogecoin.
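As a rough sketch of the miner-pledge arithmetic above: the 5% rate matches the example in the text, but the prices and the 1:1 dollar peg for the token are purely illustrative assumptions.

```python
# Hypothetical miner-pledge arithmetic: a miner pledges a fraction of his
# mined bitcoins to the dominant assurance contract, and the pool earns
# reputation tokens pegged to Bitcoin's market value. The prices and the
# token peg below are illustrative, not part of any real protocol.
def reputation_tokens(mined_btc: float, pledge_rate: float,
                      btc_price: float, token_price: float) -> float:
    pledged_btc = mined_btc * pledge_rate
    return pledged_btc * btc_price / token_price

# A miner who mined 10 BTC pledges 5%; with BTC at $300 and the token
# pegged at $1, the pool earns 150 reputation tokens.
tokens = reputation_tokens(10.0, 0.05, 300.0, 1.0)
```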
Another method of implementation uses the Lighthouse platform. Lighthouse is a lightweight encrypted HD wallet that performs payment verification by synchronizing directly with the blockchain. It also enables dominant assurance contracts, letting people pledge bitcoins to projects directly. If pledgers want their money back before the contract reaches its target amount, they can revoke the pledges they have already made. Because the contract lives entirely on the blockchain, pledges cannot be claimed individually; they can only be claimed once the combined pledges reach the target amount.
In the LTB network, proof-of-reputation is being implemented as a defense against sockpuppets, based on token-controlled access. Each piece of content is mapped to certain tokens and quantities. If the required quantity is zero, the content is accessible to all users; if it is more than zero, the content is blocked unless the user's wallet holds the required tokens.
Token-Controlled Access (TCA) is a simple idea. In a given system, different levels of access to that system are granted according to the combination of tokens in a particular user’s wallet.
Token Controlled Viewpoint (TCV) is an application of TCA to information content (forums, posts, comments, bonus content, bloopers, walkthroughs, tips, tweets, supplemental blogs, RSS feeds or other data) on basic web pages. [source]
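The access check itself is simple. Here is a minimal sketch, assuming content is mapped to per-token required quantities and access is decided by the user's wallet balances (the LTBCOIN amounts are illustrative):

```python
# Token-Controlled Access: content is visible only if the wallet holds
# at least the required quantity of every mapped token.
def can_view(wallet: dict, required: dict) -> bool:
    return all(wallet.get(token, 0) >= qty for token, qty in required.items())

free_post = {}                   # required quantity zero: open to everyone
bonus_post = {"LTBCOIN": 100}    # requires 100 LTBCOIN in the wallet

assert can_view({"LTBCOIN": 50}, free_post)       # no tokens required
assert not can_view({"LTBCOIN": 50}, bonus_post)  # not enough tokens
assert can_view({"LTBCOIN": 150}, bonus_post)     # access granted
```

Because reputation tokens must be earned, a sockpuppet's empty wallet locks it out of any gated content, which is the point of tying access to tokens rather than to identities.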
This article is meant for informational purposes and is not an endorsement. Articles published on the LTB network are the author’s personal opinion and do not necessarily represent the opinions of the LTB network.