This article is written by the CoinEx Chain lab. CoinEx Chain is the world's first public chain exclusively designed for DEX, and will also include a Smart Chain supporting smart contracts and a Privacy Chain protecting users' privacy.

longcpp @ 20200618

This is Part 1 of a series of articles aimed at explaining the Tendermint consensus protocol in detail.

Part 1. Preliminaries of the consensus protocol: the security model and the PBFT protocol

Part 2. Tendermint consensus protocol illustrated: the two-phase voting protocol and the locking and unlocking mechanism

Part 3. The weighted round-robin proposer selection algorithm used in the Tendermint project

Any consensus ultimately reached is a general agreement, that is, the majority opinion, and the consensus protocol on which a blockchain system operates is no exception. As a distributed system, a blockchain aims to maintain the validity of the system. Intuitively, this validity has two aspects: first, there is no ambiguity about the system's state, and second, the system can keep processing requests to update its state. The former corresponds to the safety requirement of distributed systems, the latter to liveness. Both are maintained mainly by the consensus protocol, and the fact that such systems involve multiple nodes and potentially unstable network communication poses huge challenges to consensus protocol design.
The semi-synchronous network model and Byzantine fault tolerance
Researchers of distributed systems characterize the problems that may occur in nodes and in network communication using node failure models and network models. The fail-stop failure in node failure models refers to the situation where a node stops running due to configuration errors or other reasons and thus can no longer participate in the consensus protocol. This type of failure has no side effects on the rest of the distributed system beyond the halt of the node itself. However, for distributed systems such as public blockchains, the design of a consensus protocol must also consider deliberate misbehavior by nodes, not just their failure. All such incidents are covered by the Byzantine failure model, which includes every unexpected situation that may occur at a node, from passive downtime to any arbitrary deviation from the consensus protocol. To be precise, downtime failure refers to a node's passive halt, while Byzantine failure refers to any arbitrary deviation of a node from the consensus protocol.

Compared with the node failure model, which can be roughly divided into passive and active failures, modeling network communication is more difficult. The network itself suffers from instability and communication delays. Moreover, since all network communication is ultimately performed by nodes, which may themselves suffer downtime or Byzantine failures, it is usually hard to tell, when a node does not receive another node's message, whether the fault lies with the node or with the network. Although network communication is affected by many factors, researchers found that network models can be classified by communication delay. For example, a node that suffers a fail-stop failure may fail to send packets at all, in which case the corresponding communication delay is unbounded.
According to the concept of communication delay, the network communication model can be divided into the following three categories:
The synchronous network model: There is a fixed, known upper bound of delay $\Delta$ in network communication. Under this model, the maximum delay of network communication between two nodes in the network is $\Delta$. Even if there is a malicious node, the communication delay arising therefrom does not exceed $\Delta$.
The asynchronous network model: There is no known upper bound on the network communication delay, but every message is still eventually delivered. Under this model, the communication delay between two nodes can take any value; that is, a malicious node, if any, can extend the communication delay arbitrarily.
The semi-synchronous network model: Assume there is a Global Stabilization Time (GST), before which the network behaves as the asynchronous model and after which as the synchronous model; that is, after GST there is a fixed, known upper bound of delay $\Delta$ in network communication. A malicious node can delay the GST arbitrarily, and there is no notification when GST occurs. Under this model, a message sent at time $T$ is delivered no later than $\Delta + \max(T, GST)$.
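The delivery guarantee of the semi-synchronous model can be written down as a small sketch (the function name is illustrative, not from any protocol specification):

```python
# Upper bound on the delivery time of a message sent at time t_sent under
# the semi-synchronous (partially synchronous) model: before GST the
# network may behave asynchronously, but any message still outstanding at
# GST arrives within delta afterwards.

def delivery_deadline(t_sent: float, gst: float, delta: float) -> float:
    """Latest possible delivery time of a message sent at t_sent."""
    return delta + max(t_sent, gst)

# A message sent before GST may be delayed until GST + delta ...
assert delivery_deadline(t_sent=2.0, gst=10.0, delta=1.0) == 11.0
# ... while a message sent after GST is delivered within delta.
assert delivery_deadline(t_sent=20.0, gst=10.0, delta=1.0) == 21.0
```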
The synchronous network model is the most ideal network environment: every message sent through the network is received within a predictable time. But this model cannot reflect real network conditions, as failures in real networks are inevitable from time to time, breaking the assumption of the synchronous model. The asynchronous network model goes to the other extreme and cannot reflect real network conditions either. Moreover, according to the FLP (Fischer-Lynch-Paterson) impossibility theorem, under this model no deterministic consensus protocol can guarantee consensus in bounded time if even a single node can fail. In contrast, the semi-synchronous network model better describes real-world network communication: the network is usually synchronous, or returns to normal after a short time. Everyone has had this experience: a web page that usually loads quickly occasionally opens slowly, and you only find out that the network is back to normal by retrying, since there is usually no notification. The peer-to-peer (P2P) communication widely used in blockchain projects also lets a node send and receive information through multiple network channels, so blocking a node's communication for a long time is unrealistic. Therefore, all the discussion below assumes the semi-synchronous network model.

The design and selection of consensus protocols for public chain networks, which allow nodes to join and leave dynamically, must account for possible Byzantine failures. Therefore, the consensus protocol of a public chain network is designed to guarantee the safety and liveness of the network under the semi-synchronous network model, on the premise that Byzantine failures may occur.
Researchers of distributed systems point out that to ensure the security and liveness of the system, the consensus protocol itself needs to meet three requirements:
Validity: The value agreed upon by honest nodes must be a value proposed by one of them
Agreement: All honest nodes must reach consensus on the same value
Termination: The honest nodes must eventually reach consensus on a certain value
Validity and agreement can guarantee the security of the distributed system, that is, the honest nodes will never reach a consensus on a random value, and once the consensus is reached, all honest nodes agree on this value. Termination guarantees the liveness of distributed systems. A distributed system unable to reach consensus is useless.
The CAP theorem and Byzantine Generals Problem
In a semi-synchronous network, is it possible to design a Byzantine fault-tolerant consensus protocol that satisfies validity, agreement, and termination? How many Byzantine nodes can such a system tolerate? The CAP theorem and the Byzantine Generals Problem answer these two questions and have thus become the basic guidelines for the design of Byzantine fault-tolerant consensus protocols.

In 1982, Lamport, Shostak, and Pease abstracted the design of consensus mechanisms in distributed systems as the Byzantine Generals Problem, which describes the following situation: several generals each lead an army in a war, with their troops stationed in different places. The generals must formulate a unified action plan to win. However, since the camps are far from each other, they can only communicate through messengers; in other words, they cannot meet in one place to reach a consensus. Unfortunately, among the generals there are one or more traitors who intend to undermine the unified action of the loyal generals by sending wrong information, and the messengers cannot alter the messages they carry. It is assumed that each messenger can prove that the information he brings comes from a certain general, just as in a real BFT consensus protocol each node holds its own public and private keys, allowing nodes to establish authenticated channels so that messages cannot be tampered with in transit and the receiver can verify the sender of each message. As already mentioned, any consensus ultimately reached represents the opinion of the majority; in the process of communicating about attack or retreat, a general likewise decides based on the majority opinion among the information he has collected.
According to the research of Lamport et al., if 1/3 or more of the generals are traitors, the loyal generals cannot reach a unified decision. For example, in the following figure, assume there are 3 generals and exactly 1 traitor. In the figure on the left, suppose General C is the traitor while A and B are loyal. If A wants to launch an attack and informs B and C of this intention, the traitor C can send a message to B claiming that what he received from A was a retreat order. In this case B cannot decide: he does not know who the traitor is, and the information he has received is insufficient. In the figure on the right, if A is the traitor, he can send different messages to B and C, and C then faithfully reports to B the information he received. Receiving conflicting information, B again cannot make any decision. In both situations shown in the figure below, the honest General B cannot make a choice, and even if B had received consistent information, he could not spot the traitor between A and C. In general, with $n$ generals and at most $f$ traitors, the generals cannot reach a consensus if $n \leq 3f$; with $n > 3f$, a consensus can be reached. Equivalently, when the number of Byzantine nodes $f$ reaches 1/3 of the total number of nodes $n$, i.e. $f \geq n/3$, no consensus protocol can guarantee agreement among all honest nodes; consensus is achievable only when $f < n/3$. Without loss of generality, the subsequent discussion of consensus protocols assumes $n \geq 3f + 1$ by default. The conclusion reached by Lamport et al. on the Byzantine Generals Problem draws the line between the possible and the impossible in the design of Byzantine fault-tolerant consensus protocols. Within the possible range, how should a consensus protocol be designed?
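The $n > 3f$ bound above can be captured in two helper functions (a hedged sketch; the names max_byzantine_faults and quorum_size are illustrative):

```python
# The n > 3f bound from the Byzantine Generals result: a system of n nodes
# tolerates at most f Byzantine nodes with n >= 3f + 1. quorum_size gives
# the 2f + 1 matching votes a BFT protocol such as PBFT collects, so that
# any two quorums intersect in at least one honest node.

def max_byzantine_faults(n: int) -> int:
    """Largest f satisfying n >= 3f + 1."""
    return (n - 1) // 3

def quorum_size(n: int) -> int:
    """2f + 1 for the largest tolerable f."""
    return 2 * max_byzantine_faults(n) + 1

assert max_byzantine_faults(3) == 0   # 3 generals cannot tolerate a traitor
assert max_byzantine_faults(4) == 1   # n = 3f + 1 with f = 1
assert quorum_size(4) == 3            # 2f + 1 = 3
```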
Can both the safety and liveness of distributed systems be fully guaranteed? Brewer provided the answer with his CAP theorem in 2000: a distributed system involves the following three basic attributes, but any distributed system can meet only two of the three at the same time.
Consistency: When any node responds to the request, it must either provide the latest status information or provide no status information
Availability: Any node in the system must be able to continue reading and writing
Partition Tolerance: The system can tolerate the loss of any number of messages between two nodes and still function normally
https://preview.redd.it/1ozfwk7u7m851.png?width=1400&format=png&auto=webp&s=fdee6318de2cf1c021e636654766a7a0fe7b38b4

A distributed system aims to provide consistent services. The consistency attribute therefore requires that no two nodes in the system provide conflicting or expired status information, which ensures the safety of the distributed system. The availability attribute ensures that the system can continuously update its status, guaranteeing the liveness of the distributed system. The partition tolerance attribute relates to network communication delay: under the semi-synchronous network model, before GST the network is in an asynchronous state with unknown communication delay, so communicating nodes may not receive each other's messages and the network is considered partitioned. Partition tolerance requires the distributed system to function normally even under network partitions.

The proof of the CAP theorem can be demonstrated with the following diagram. The curve represents a network partition, and the network has four nodes, numbered 1, 2, 3, and 4. The distributed system stores color information, and the status stored by all nodes is initially blue.
Partition tolerance and availability imply the loss of consistency: in the leftmost figure, node 1 receives a new request and changes its status to red; the status update is passed to node 3, which also updates its status to red. However, node 2 and node 4 do not receive the corresponding update because of the network partition, so their status information is still blue. At this moment, if the status is queried through node 2, the blue it returns is not the latest status of the system, and consistency is lost.
Partition tolerance and consistency imply the loss of availability: in the middle figure, the initial status of all nodes is blue. Node 1 and node 3 update their status to red, while node 2 and node 4 keep the outdated blue because of the network partition. When the status is queried through node 2, node 2, in order to preserve consistency, must first ask the other nodes to confirm that it holds the latest status before answering. But because of the partition, node 2 cannot receive anything from node 1 or node 3, cannot determine whether its status is the latest, and therefore chooses to return no information, depriving the system of availability.
Consistency and availability imply the loss of partition tolerance: in the rightmost figure, the system initially has no network partition, and both status updates and queries proceed smoothly. However, once a network partition occurs, the system degenerates into one of the previous two cases. It is thus proved that no distributed system can have consistency, availability, and partition tolerance all at the same time.
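The trade-off in the first two cases can be illustrated with a toy two-replica sketch (purely illustrative; the Replica class and its methods are invented for this example):

```python
# Under a partition, a replica must either answer with possibly stale
# state (availability over consistency) or refuse to answer
# (consistency over availability).

class Replica:
    def __init__(self, value: str):
        self.value = value
        self.partitioned = False

    def read_available(self) -> str:
        # AP choice: always answer, even if the value may be stale.
        return self.value

    def read_consistent(self):
        # CP choice: refuse to answer when the other replicas are
        # unreachable, since freshness cannot be confirmed.
        return None if self.partitioned else self.value

node1, node2 = Replica("blue"), Replica("blue")
node2.partitioned = True
node1.value = "red"                        # update reaches node 1 only
assert node2.read_available() == "blue"    # stale read: consistency lost
assert node2.read_consistent() is None     # no answer: availability lost
```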
https://preview.redd.it/456x2blv7m851.png?width=1400&format=png&auto=webp&s=550797373145b8fc1471bdde68ed5f8d45adb52b

The discovery of the CAP theorem seems to declare the aforementioned goals of the consensus protocol impossible. If you look carefully, however, the scenarios above are all extreme cases, such as network partitions that block information transmission entirely, which are rare, especially in a P2P network. Moreover, in the second case a practical system rarely behaves exactly like node 2: the common practice is to query other nodes and, after a while, return what it believes to be the latest status, regardless of whether it has received replies from the other nodes. Therefore, although the CAP theorem points out that no distributed system can satisfy all three attributes at the same time, it is not a binary choice: the designer of a consensus protocol can weigh the three attributes according to the needs of the distributed system. Since communication delay is always present in a distributed system, however, one always has to choose between availability and consistency while ensuring a certain degree of partition tolerance. Concretely, in the second case, the choice concerns the value that node 2 returns: a possibly outdated value, or no value at all. Returning a possibly outdated value may violate consistency but preserves availability; returning no value sacrifices availability but preserves consistency. The Tendermint consensus protocol to be introduced later chooses consistency in this trade-off; in other words, it sacrifices availability in some cases. The genius of Satoshi Nakamoto is that, within the constraints of the CAP theorem, he managed to achieve reliable Byzantine consensus in a distributed network by combining the PoW mechanism, the Nakamoto consensus, and economic incentives with an appropriate parameter configuration.
Whether Bitcoin's mechanism design solves the Byzantine Generals Problem has remained disputed in academia. Garay, Kiayias, and Leonardos analyzed the link between Bitcoin's mechanism design and Byzantine consensus in detail in their paper The Bitcoin Backbone Protocol: Analysis and Applications. In simple terms, the Nakamoto consensus is a probabilistic Byzantine fault-tolerant consensus protocol whose guarantees depend on conditions such as the network communication environment and the proportion of hashrate held by malicious nodes. When malicious nodes control less than 1/2 of the hashrate and the network communication environment is good, the Nakamoto consensus can reliably solve the Byzantine consensus problem in a distributed environment. However, when the environment turns bad, the Nakamoto consensus may fail to reach a reliable conclusion even if the malicious proportion stays below 1/2. It is worth noting that the quality of the network environment is relative to Bitcoin's block interval: the 10-minute block generation interval ensures that the system is in a good network communication environment most of the time, given that broadcasting a block through the distributed network usually takes only a few seconds. In addition, economic incentives motivate most nodes to actively comply with the protocol. It is thus considered that, with the current network parameter configuration and mechanism design, Bitcoin reliably solves the Byzantine consensus problem in the current network environment.
Practical Byzantine Fault Tolerance, PBFT
It is no easy task to design a Byzantine fault-tolerant consensus protocol for a semi-synchronous network. The first practically usable one is the Practical Byzantine Fault Tolerance (PBFT) protocol designed by Castro and Liskov in 1999, the first of its kind with polynomial complexity: for a distributed system with $n$ nodes, the communication complexity is $O(n^2)$. Castro and Liskov showed in the paper that turning a centralized file system into a distributed one with the PBFT protocol slowed overall performance down by only 3%. This section briefly introduces the PBFT protocol, paving the way for the detailed explanation of the Tendermint protocol and of Tendermint's improvements over PBFT.

A PBFT deployment with $n = 3f + 1$ nodes can tolerate up to $f$ Byzantine nodes. The original PBFT paper requires full connectivity among all $n$ nodes, that is, any two of the $n$ nodes must be connected. All nodes jointly maintain the system status through network communication. Unlike the Bitcoin network, where a node can join or leave the consensus process through hashrate mining at any time, the set of participating nodes in PBFT is managed by an administrator and must be determined before the protocol starts. The nodes in the PBFT protocol are divided into two categories, master nodes and slave nodes. There is only one master node at any time, and all nodes take turns serving as the master node. The nodes run in rounds called views, and in each view the master node is re-elected. The master node selection algorithm in PBFT is very simple: all nodes become the master node in turn by index number. In each view, all nodes try to reach consensus on the system status. It is worth mentioning that in the PBFT protocol each node has its own digital signature key pair.
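The round-robin master selection rule just described can be sketched in one line (the function name primary is illustrative):

```python
# In view v, node v mod n serves as the master (primary) node; views wrap
# around the node indices as the system moves from view to view.

def primary(view: int, n: int) -> int:
    """Index of the master node for a given view, round-robin."""
    return view % n

assert primary(0, 4) == 0
assert primary(5, 4) == 1   # view 5 wraps around four nodes
```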
All messages sent (including request messages from the client) must be signed to ensure their integrity in the network and the traceability of the message itself (the digital signature identifies who sent each message). The following figure shows the basic flow of the PBFT consensus protocol. Assume the master node of the current view is node 0. Client C initiates a request to master node 0. After receiving the request, the master node broadcasts it to all slave nodes; the nodes process the request of client C and return the result to the client. Once the client receives f+1 identical results from different nodes (distinguished by their signatures), that result can be taken as the final result of the operation. Since the system has at most f Byzantine nodes, at least one of the f+1 results received by the client comes from an honest node, and the safety of the consensus protocol guarantees that all honest nodes reach consensus on the same status; the feedback of a single honest node is therefore enough to confirm that the request has been processed by the system.

https://preview.redd.it/sz8so5ly7m851.png?width=1400&format=png&auto=webp&s=d472810e76bbc202e91a25ef29a51e109a576554

For the status synchronization of all honest nodes, the PBFT protocol imposes two constraints on each node: first, all nodes must start from the same status; second, the status transition of all nodes must be deterministic, that is, given the same status and request, the result of the operation must be the same. Under these two constraints, as long as the entire system agrees on the processing order of all transactions, the statuses of all honest nodes will be consistent. This is the main purpose of the PBFT protocol: to reach consensus on the order of transactions among all nodes, thereby ensuring the safety of the entire distributed system.
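The client-side acceptance rule described above (waiting for f+1 matching replies) can be sketched as follows; the function name accepted_result is illustrative, not from the PBFT paper:

```python
from collections import Counter

# With at most f Byzantine nodes, f + 1 matching replies from distinct
# nodes guarantee that at least one reply comes from an honest node.

def accepted_result(replies: dict, f: int):
    """replies maps node id -> result; return a result reported by at
    least f + 1 distinct nodes, else None (keep waiting)."""
    counts = Counter(replies.values())
    for result, votes in counts.items():
        if votes >= f + 1:
            return result
    return None

# f = 1: node 3 is faulty and lies, but two matching replies suffice.
assert accepted_result({0: "ok", 1: "ok", 3: "bad"}, f=1) == "ok"
# Only one honest-looking reply so far: the client must keep waiting.
assert accepted_result({0: "ok", 3: "bad"}, f=1) is None
```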
In terms of availability, the PBFT consensus protocol relies on a timeout mechanism to detect anomalies in the consensus process and start the view change protocol in time to try to reach consensus again. The figure above shows a simplified workflow of the PBFT protocol, where C is the client and 0, 1, 2, and 3 are the four nodes: node 0 is the master node of the current view, nodes 1, 2, and 3 are slave nodes, and node 3 is faulty. Under normal circumstances, the PBFT consensus protocol reaches consensus on the order of transactions among the nodes through a three-phase protocol. The three phases are Pre-Prepare, Prepare, and Commit:
In the Pre-Prepare phase, the master node is responsible for assigning a sequence number to each received client request and broadcasting a PRE-PREPARE message to the slave nodes. The message contains the hash value d of the client request, the sequence number v of the current view, the sequence number n assigned by the master node to the request, and the master node's signature sig. (The PBFT protocol separates request transmission from request ordering; request transmission is not discussed here.) A slave node that receives the PRE-PREPARE message accepts it after confirming it is legitimate, and enters the Prepare phase. The legitimacy checks in this step cover the signature, the hash value, and the current view, and, most importantly, whether the master node has already assigned the same sequence number to another request in the current view.
In the Prepare phase, the slave node broadcasts a PREPARE message to all nodes (including itself), indicating that it accepts the assignment of sequence number n to the client request with hash value d in the current view v, with its signature sig as proof. A node receiving a PREPARE message checks the correctness of the signature, the matching of the view and sequence number, and so on, and accepts legitimate messages. When the PRE-PREPARE message for a client request (from the master node) received by a node is matched by PREPARE messages from 2f slave nodes, the system has agreed on the sequence number of that request in the current view. This means 2f+1 nodes in the current view agree with the sequence number assignment. Since at most f of these messages may come from malicious nodes, at least f+1 honest nodes have agreed with the assignment; with f malicious nodes there are 2f+1 honest nodes in total, so f+1 represents the majority of the honest nodes, which is the majority consensus mentioned before.
In the Commit phase, after a node (whether master or slave) has received the PRE-PREPARE message for a client request and 2f matching PREPARE messages, it broadcasts a COMMIT message across the network. This message indicates that the node has observed that the whole network has reached consensus on the sequence number assignment of the client's request. When a node receives 2f+1 COMMIT messages, at least f+1 of them come from honest nodes, that is, the majority of the honest nodes have observed that the entire network has reached consensus on the sequence number assignment of the client's request. The node can then execute the client request and return the execution result to the client.
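The quorum thresholds of the three phases above can be sketched for a single (view, sequence number, digest) slot; the Slot class is an illustrative simplification that omits message validation, signatures, and view changes:

```python
# Per-request bookkeeping for the three-phase protocol: a slot is
# "prepared" after a PRE-PREPARE plus 2f matching PREPAREs, and
# "committed" after 2f + 1 matching COMMITs.

class Slot:
    def __init__(self, f: int):
        self.f = f
        self.pre_prepared = False
        self.prepares = set()   # node ids that sent a matching PREPARE
        self.commits = set()    # node ids that sent a matching COMMIT

    def prepared(self) -> bool:
        # PRE-PREPARE from the master node plus 2f matching PREPAREs
        return self.pre_prepared and len(self.prepares) >= 2 * self.f

    def committed(self) -> bool:
        # 2f + 1 matching COMMITs (possibly including this node's own)
        return len(self.commits) >= 2 * self.f + 1

slot = Slot(f=1)
slot.pre_prepared = True
slot.prepares.update({1, 2})     # 2f = 2 PREPAREs received
assert slot.prepared()
slot.commits.update({0, 1, 2})   # 2f + 1 = 3 COMMITs received
assert slot.committed()
```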
Roughly speaking, in the Pre-Prepare phase the master node assigns a sequence number to each new client request; in the Prepare phase all nodes reach consensus on the request's sequence number within the current view; and the Commit phase guarantees the consistency of the request's sequence number across different views. In addition, the design of the PBFT protocol does not require requests to be committed in sequence-number order; they can be committed out of order, which improves the efficiency of the protocol's implementation. Still, for the consistency of the distributed system, requests are executed in the order of the sequence numbers assigned by the consensus protocol.

During the three-phase protocol, in addition to maintaining the status of the distributed system, each node also has to log all the consensus messages it receives, and the gradual accumulation of logs consumes considerable system resources. The PBFT protocol therefore additionally defines checkpoints to help nodes with garbage collection. A checkpoint can be set every 100 or 1000 sequence numbers. After the client request at a checkpoint has been executed, a node broadcasts a CHECKPOINT message across the network, stating that after executing the client request with sequence number n, the hash value of the system status is d, vouched for by its own signature sig. After 2f+1 matching CHECKPOINT messages (one of which may come from the node itself) have been received, the majority of the honest nodes in the network have reached consensus on the system status after the execution of the client request with sequence number n, and the node can then clear all log records for client requests with sequence numbers less than n.
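The checkpoint mechanism just described can be sketched as follows (illustrative names; real PBFT logs and message formats are richer):

```python
from collections import Counter

# Once 2f + 1 matching CHECKPOINT messages vouch for sequence number n,
# log entries below n can be discarded.

def stable_checkpoint(checkpoints: dict, f: int):
    """checkpoints maps node id -> (n, state_digest); return the highest
    (n, digest) vouched for by at least 2f + 1 nodes, else None."""
    counts = Counter(checkpoints.values())
    stable = [cp for cp, votes in counts.items() if votes >= 2 * f + 1]
    return max(stable) if stable else None

def collect_garbage(log: dict, stable_n: int) -> dict:
    """Drop log entries below the stable checkpoint's sequence number."""
    return {n: entry for n, entry in log.items() if n >= stable_n}

# f = 1: three matching CHECKPOINTs at n = 100 make it stable.
cps = {0: (100, "d1"), 1: (100, "d1"), 2: (100, "d1"), 3: (90, "d0")}
assert stable_checkpoint(cps, f=1) == (100, "d1")
log = {90: "req-a", 99: "req-b", 100: "req-c", 101: "req-d"}
assert collect_garbage(log, 100) == {100: "req-c", 101: "req-d"}
```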
The node needs to keep these 2f+1 CHECKPOINT messages as proof of the legitimacy of the status at that point, and the corresponding checkpoint is called a stable checkpoint.

The three-phase protocol of PBFT ensures the consistency of the processing order of client requests, and the checkpoint mechanism helps nodes perform garbage collection while further ensuring the status consistency of the distributed system; together they guarantee the safety of the distributed system discussed above. How is the availability of the distributed system guaranteed? In the semi-synchronous network model, a timeout mechanism is usually introduced, tied to the delays in the network environment, since the network delay is assumed to have a known upper bound after GST. An initial timeout value is usually set according to the network conditions of the deployed system; when a timeout event occurs, besides triggering the corresponding processing flow, additional mechanisms readjust the waiting time, for example an exponential back-off algorithm like TCP's. The PBFT protocol likewise introduces a timeout mechanism to ensure the availability of the system. Moreover, since the master node itself may suffer a Byzantine failure, the PBFT protocol must also preserve the safety and availability of the system in that case. When the master node fails in a Byzantine way, for example when a slave node does not receive the master node's PRE-PREPARE message within the time window, or receives a PRE-PREPARE message from the master node that is determined to be illegitimate, the slave node can broadcast a VIEWCHANGE message to the entire network, indicating that it requests a switch to the new view with sequence number v+1.
In the VIEWCHANGE message, n is the request sequence number of the latest stable checkpoint known to the node, and C is the proof of that stable checkpoint: the 2f+1 legitimate CHECKPOINT messages mentioned above. Between the latest stable checkpoint and the initiation of the VIEWCHANGE message, the system may already have reached consensus on the sequence numbers of some requests in the previous view. To keep these sequence numbers consistent across the view change, the VIEWCHANGE message has to carry this information into the new view, which is the purpose of the P field. P contains, for every client request at this node with a sequence number greater than n, the proof that consensus on its sequence number was reached at the node: the legitimate PRE-PREPARE message of the request and 2f matching PREPARE messages. When the master node of view v+1 has collected 2f+1 VIEWCHANGE messages, it can broadcast a NEW-VIEW message and take the entire system into the new view. To preserve the safety of the system in combination with the three-phase protocol, the construction rules for the NEW-VIEW message are quite complicated; refer to the original PBFT paper for details.

https://preview.redd.it/x5efdc908m851.png?width=1400&format=png&auto=webp&s=97b4fd879d0ec668ee0990ea4cadf476167a2948

A VIEWCHANGE message contains a lot of information: for example, C contains 2f+1 signatures, and P contains several signature sets, each with 2f+1 signatures. At least 2f+1 nodes need to send VIEWCHANGE messages before the system can enter the next view, which means that, in addition to the complex logic of constructing the VIEWCHANGE and NEW-VIEW messages, the communication complexity of the view change protocol is $O(n^2)$.
Such complexity limits the PBFT protocol to supporting only a small number of nodes; with around 100 nodes it is usually too costly to deploy PBFT in practice. It is worth noting that some materials inappropriately attribute the communication complexity of the PBFT protocol to the full connection between n nodes. By changing the fully connected network topology to the P2P network topology based on distributed hash tables commonly used in blockchain projects, the high communication complexity caused by full connection can be conveniently avoided, yet it remains difficult to improve the communication complexity of the view change process. In recent years, researchers have proposed reducing the amount of communication in this step by adopting an aggregate signature scheme. With this technology, 2f+1 signatures can be compressed into one, thereby reducing the communication volume during view change.
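A back-of-the-envelope calculation shows why the O(n²) cost bites long before 100 nodes. This is only an order-of-magnitude illustration of all-to-all broadcast, not the exact PBFT message count (which also multiplies per-message size, since a VIEWCHANGE itself carries O(n) signatures):

```python
# Illustration: with all-to-all voting, each phase costs on the order
# of n**2 messages, so message volume grows quadratically with n.
def phase_messages(n):
    return n * (n - 1)  # every node broadcasts to every other node

assert phase_messages(4) == 12      # minimal PBFT deployment (f = 1)
assert phase_messages(16) == 240
assert phase_messages(100) == 9900  # ~25x the n=16 cost for ~6x the nodes
```

Compressing the 2f+1 signatures in a view-change quorum into a single aggregate signature shrinks each message, but the quadratic number of messages exchanged is the harder part to remove.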
ABC is moving on with their new "pre-consensus" avalanche tech, which will completely destroy Bitcoin and Satoshi's design, and the POW consensus mechanism.
ABC moving on with their new pre-consensus stuff "avalanche". They claim it will be good for 0-conf but really they are completely changing Satoshi's design and consensus mechanism, bypassing miners and POW, weakening the security of the system. Now if you want to attack the system you just have to attack the "pre-consensus" layer. It is not "pre-consensus", it is a new consensus in place of Satoshi's design. It is the first step to getting rid of miners, which Core and ABC all hate. They would rather move to a system of Proof of Stake (POS), users voting, or a social consensus system that is easily taken over and controlled by the dev team or by state entities. Amaury is also planning to add segwit style MalFix to the ABC codebase, the same as Core did. They already have snuck in centralized checkpoints and 10 block deep reorg protection, without telling miners or users. They even refuse to raise the blocksize, the same as Core and say 22MB blocks are impossible, while coingeek mines 64MB blocks. What a nightmare. BABcoin is fiat 2.0
The biggest problem with bitcoin right now is that as demand for bitcoin grows, the old design is being exposed: it has no mechanism for coping with high transaction volume, leading to huge fees and long wait times for transactions /r/btc
Cosmos is a heterogeneous network of many independent parallel blockchains, each powered by classical BFT consensus algorithms like Tendermint. Developers can easily build custom application-specific blockchains, called Zones, through the Cosmos SDK framework. These Zones connect to Hubs, which are specifically designed to connect zones together. The vision of Cosmos is to have thousands of Zones and Hubs that are interoperable through the Inter-Blockchain Communication Protocol (IBC). Cosmos can also connect to other systems through peg zones, which are specifically designed zones each custom made to interact with another ecosystem such as Ethereum or Bitcoin. Cosmos does not use sharding; each Zone and Hub is sovereign with its own validator set. For a more in-depth look at Cosmos and more context for the points made in this article, please see my three part series — Part One, Part Two, Part Three (There's a youtube video with a quick overview of Cosmos on the medium article - https://medium.com/ava-hub/comparison-between-avalanche-cosmos-and-polkadot-a2a98f46c03b)
Polkadot is a heterogeneous blockchain protocol that connects multiple specialised blockchains into one unified network. It achieves scalability through a sharding infrastructure with multiple blockchains running in parallel, called parachains, that connect to a central chain called the Relay Chain. Developers can easily build custom application-specific parachains through the Substrate development framework. The relay chain validates the state transitions of connected parachains, providing shared state across the entire ecosystem. If the Relay Chain must revert for any reason, then all of the parachains would also revert. This is to ensure that the validity of the entire system can persist, and no individual part is corruptible. The shared state makes it so that the trust assumptions when using parachains are only those of the Relay Chain validator set, and no other. Interoperability is enabled between parachains through the Cross-Chain Message Passing (XCMP) protocol, and it is also possible to connect to other systems through bridges, which are specifically designed parachains or parathreads each custom made to interact with another ecosystem such as Ethereum or Bitcoin. The hope is to have 100 parachains connect to the relay chain. For a more in-depth look at Polkadot and more context for the points made in this article, please see my three part series — Part One, Part Two, Part Three (There's a youtube video with a quick overview of Polkadot on the medium article - https://medium.com/ava-hub/comparison-between-avalanche-cosmos-and-polkadot-a2a98f46c03b)
Avalanche is a platform of platforms, ultimately consisting of thousands of subnets forming a heterogeneous interoperable network of many blockchains. It takes advantage of the revolutionary Avalanche consensus protocols to provide a secure, globally distributed, interoperable and trustless framework offering unprecedented decentralisation whilst being able to comply with regulatory requirements. Avalanche allows anyone to create their own tailor-made application-specific blockchains, supporting multiple custom virtual machines such as the EVM and WASM, written in popular languages like Go (with others coming in the future) rather than lightly used, poorly understood languages like Solidity. These virtual machines can then be deployed on a custom blockchain network, called a subnet, which consists of a dynamic set of validators working together to achieve consensus on the state of a set of many blockchains, where complex rulesets can be configured to meet regulatory compliance. Avalanche was built with serving financial markets in mind. It has native support for easily creating and trading digital smart assets with complex custom rule sets that define how the asset is handled and traded, to ensure regulatory compliance can be met. Interoperability is enabled between blockchains within a subnet as well as between subnets. Like Cosmos and Polkadot, Avalanche is also able to connect to other systems through bridges, via custom virtual machines made to interact with another ecosystem such as Ethereum or Bitcoin. For a more in-depth look at Avalanche and more context for the points made in this article, please see here and here (There's a youtube video with a quick overview of Avalanche on the medium article - https://medium.com/ava-hub/comparison-between-avalanche-cosmos-and-polkadot-a2a98f46c03b)
Comparison between Cosmos, Polkadot and Avalanche
A frequent question I see being asked is how Cosmos, Polkadot and Avalanche compare. Whilst there are similarities, there are also a lot of differences. This article is not intended to be an exhaustive in-depth list, but rather an overview based on some of the criteria that I feel are most important. For a more in-depth view I recommend reading the articles for each of the projects linked above and coming to your own conclusions. I want to stress that it's not a case of one platform being the killer of all other platforms, far from it. There won't be one platform to rule them all, and too often tribalism has plagued this space. Blockchains are going to completely revolutionise most industries and have a profound effect on the world we know today. It's still very early in this space, with most adoption limited to speculation and trading, mainly due to the limitations of blockchain technology and the current iteration of Ethereum, which all three of these platforms hope to address. For those who just want a quick summary, see the image at the bottom of the article. With that said, let's have a look.
Each Zone and Hub in Cosmos is capable of up to around 1000 transactions per second, with bandwidth being the bottleneck in consensus. Cosmos aims to have thousands of Zones and Hubs all connected through IBC. There is no limit on the number of Zones / Hubs that can be created.
Parachains in Polkadot are also capable of up to around 1500 transactions per second. A portion of the parachain slots on the Relay Chain will be designated as a parathread pool: the performance of one parachain slot is split between many parathreads, which offer lower performance and compete amongst themselves in a per-block auction to have their transactions included in the next relay chain block. The number of parachains is limited by the number of validators on the relay chain; the hope is to achieve around 100 parachains.
Avalanche is capable of around 4500 transactions per second per subnet. This is based on modest hardware requirements of just 2 CPU cores and 4 GB of memory (to ensure maximum decentralisation) and a validator set of over 2,000 nodes. Performance is CPU-bound, and if higher performance is required, more specialised subnets can be created with higher minimum requirements, able to achieve 10,000+ tps in a subnet. Avalanche aims to have thousands of subnets (each with multiple virtual machines / blockchains), all interoperable with each other. There is no limit on the number of subnets that can be created.
All three platforms offer vastly superior performance to the likes of Bitcoin and Ethereum 1.0. Avalanche, with its higher transactions per second, no limit on the number of subnets / blockchains that can be created, and consensus that can scale to potentially millions of validators all participating at once, scores ✅✅✅. Polkadot claims to offer more tps than Cosmos, but is limited in the number of parachains (around 100), whereas with Cosmos there is no limit on the number of hubs / zones that can be created. Cosmos is limited to a fairly small validator set of around 200 before performance degrades, whereas Polkadot hopes to be able to reach 1000 validators in the relay chain (albeit only a small number of validators are assigned to each parachain). Thus Cosmos and Polkadot each score ✅✅ https://preview.redd.it/2o0brllyvpq51.png?width=1000&format=png&auto=webp&s=8f62bb696ecaafcf6184da005d5fe0129d504518
Tendermint consensus is limited to around 200 validators before performance starts to degrade. Whilst there is the Cosmos Hub, it is one of many hubs in the network; there is no central hub and no limit on the number of zones / hubs that can be created.
Polkadot has 1000 validators in the relay chain, and these are split up into small groups that validate each parachain (a minimum of 14 each). The relay chain is a central point of failure, as all parachains connect to it, and the number of parachains is limited by the number of validators (they hope to achieve 100 parachains). Due to the limited number of parachain slots available, significant sums of DOT will need to be purchased to win an auction to lease a slot for up to 24 months at a time, which is likely to restrict parachains to those with enough funds to secure a slot. Parathreads, however, are an alternative for those that require lower or more varied performance and can't secure a parachain slot.
Avalanche consensus can scale to tens of thousands of validators, even potentially millions of validators, all participating in consensus through repeated sub-sampling. The more validators, the faster the network becomes, as the load is split between them. The hardware requirements are modest, so anyone can run a node, and there is no limit on the number of subnets / virtual machines that can be created.
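The repeated sub-sampling mentioned above can be sketched in a few lines. This is a toy illustration of the idea, not Avalanche's actual protocol; the sample size k and threshold alpha here are illustrative values, not Avalanche's defaults.

```python
import random

# Hedged sketch of repeated random sub-sampling, the core idea behind
# Avalanche consensus: each round, a node queries a small random sample
# of validators and adopts the sampled majority preference only if it
# clears a threshold alpha.
def subsample_round(my_pref, validator_prefs, k=5, alpha=4):
    sample = random.sample(validator_prefs, k)
    counts = {}
    for pref in sample:
        counts[pref] = counts.get(pref, 0) + 1
    choice, votes = max(counts.items(), key=lambda kv: kv[1])
    # Only flip when a strong majority (alpha of k) of the sample agrees.
    return choice if votes >= alpha else my_pref

# A node holding a minority view is pulled toward the unanimous
# network preference in a single round.
unanimous = ["A"] * 100
assert subsample_round("B", unanimous) == "A"
```

Because each node only ever contacts k peers per round regardless of network size, per-node work stays constant as the validator count grows, which is why the approach can scale to very large validator sets.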
Avalanche offers unparalleled decentralisation using its revolutionary consensus protocols that can scale to millions of validators all participating in consensus at the same time. There is no limit to the number of subnets and virtual machines that can be created, and they can be created by anyone for a small fee; it scores ✅✅✅. Cosmos is limited to 200 validators, but there is no limit on the number of zones / hubs that can be created, which anyone can create, and it scores ✅✅. Polkadot hopes to accommodate 1000 validators in the relay chain (albeit split amongst the parachains). The number of parachains is limited and may be cost-prohibitive for many, and the relay chain is ultimately a single point of failure. I'm definitely not saying it's centralised, and it is more decentralised than many others, but in comparison between the three it scores ✅ https://preview.redd.it/ckfamee0wpq51.png?width=1000&format=png&auto=webp&s=c4355f145d821fabf7785e238dbc96a5f5ce2846
Tendermint consensus used in Cosmos reaches finality within 6 seconds. Cosmos consists of many Zones and Hubs that connect to each other, and communication between 2 zones may pass through many hubs along the way, which can add to latency depending on the path taken, as explained in part two of the articles on Cosmos. It doesn't need to wait for an extended period of time with risk of rollbacks.
Polkadot provides a hybrid consensus protocol consisting of a block production protocol, BABE, and a finality gadget called GRANDPA that works to agree on a chain, out of many possible forks, by following a simpler fork choice rule. Rather than voting on every block, it reaches agreement on chains. As soon as more than 2/3 of validators attest to a chain containing a certain block, all blocks leading up to that one are finalised at once. If an invalid block is detected after it has been finalised, then the relay chain would need to be reverted along with every parachain. This is particularly important when connecting to external blockchains, as those don't share the state of the relay chain and thus can't be rolled back. The longer the time period before finality, the more secure the network is, as there is more time for additional checks to be performed and reported, but at the expense of finality speed. Finality is reached within 60 seconds between parachains, but for external ecosystems like Ethereum, whose state obviously can't be rolled back like a parachain's, finality will need to be much longer (60 minutes was suggested in the whitepaper), as discussed in more detail in part three.
Avalanche consensus achieves finality within 3 seconds, with most transactions finalising in under 1 second, immutable and completely irreversible. Any subnet can connect directly to another without having to go through multiple hops, and any VM can talk to another VM within the same subnet as well as in external subnets. It doesn't need to wait for an extended period of time with risk of rollbacks.
With regards to performance, far too much emphasis is put on tps as a metric; the other equally important metric, if not more important with regards to finance, is latency. Throughput measures how many transactions a system can handle in a given time, whereas latency is how long it takes a single action to complete. It's pointless saying you can process more transactions per second than VISA when it takes 60 seconds for a transaction to complete. Low latency also greatly increases general usability and customer satisfaction; nowadays everyone expects card payments and online payments to happen instantly. Avalanche achieves the best results, scoring ✅✅✅; Cosmos comes in second with 6 second finality ✅✅; and Polkadot, with 60 second finality (which may be 60 minutes for external blockchains), scores ✅ https://preview.redd.it/kzup5x42wpq51.png?width=1000&format=png&auto=webp&s=320eb4c25dc4fc0f443a7a2f7ff09567871648cd
Every Zone and Hub in Cosmos has its own validator set and different trust assumptions. Cosmos is researching a shared security model where a Hub can validate the state of connected zones for a fee, but this has not been released yet. Once available, this will make shared security optional rather than mandatory.
Shared security is mandatory with Polkadot, which uses a shared state infrastructure between the Relay Chain and all of the connected parachains. If the Relay Chain must revert for any reason, then all of the parachains would also revert. Every parachain makes the same trust assumptions, and as such the relay chain validates their state transitions and enables seamless interoperability between them. In return for this benefit, they have to purchase DOT and win an auction for one of the available parachain slots. However, parachains can't rely solely on the relay chain for their security: as discussed in part three, they also need to implement censorship resistance measures and ensure sybil resistance mechanisms such as proof of work or proof of stake on the parachain itself.
A subnet in Avalanche consists of a dynamic set of validators working together to achieve consensus on the state of a set of many blockchains, where complex rulesets can be configured to meet regulatory compliance. So unlike in Cosmos, where each zone / hub has its own validators, a subnet can validate a single virtual machine / blockchain or many of them with a single validator set. Shared security is optional.
Shared security is mandatory in Polkadot and a key design decision in its infrastructure. The relay chain validates the state transitions of all connected parachains, and thus it scores ✅✅✅. Subnets in Avalanche can validate the state of either a single or many virtual machines. Each subnet can have its own token and shares a validator set, where complex rulesets can be configured to meet regulatory compliance. It scores ✅✅. Every Zone and Hub in Cosmos has its own validator set / token, but research is underway to have the hub validate the state transitions of connected zones; as this is still early in the research phase, it scores ✅ for now. https://preview.redd.it/pbgyk3o3wpq51.png?width=1000&format=png&auto=webp&s=61c18e12932a250f5633c40633810d0f64520575
The Cosmos project started in 2016 with an ICO held in April 2017. There are currently around 50 projects building on the Cosmos SDK; a full list can be seen here by filtering for Cosmos SDK. Not all of the projects will necessarily connect using the native Cosmos SDK and IBC; some, such as Binance Chain, have forked parts of the Cosmos SDK and utilise Tendermint consensus, but have said they will connect in the future.
The Polkadot project started in 2016 with an ICO held in October 2017. There are currently around 70 projects building on Substrate; a full list can be seen here by filtering for Substrate Based. As with Cosmos, not all projects built using Substrate will necessarily connect to Polkadot, and parachains or parathreads aren't currently implemented in either the live network or the test network (Kusama) as of the time of this writing.
Avalanche in comparison started much later, with Ava Labs being founded in 2018. Avalanche held its ICO in July 2020. Due to the much shorter time it has been in development, the number of confirmed projects is smaller, with around 14 projects currently building on Avalanche. Due to the customisability of the platform, though, many virtual machines can be used within a subnet, making it incredibly easy to port projects over. As an example, it will launch with the Ethereum Virtual Machine, which enables byte-for-byte compatibility, and all the tooling like Metamask, Truffle etc. will work, so projects can easily move over to benefit from the performance, decentralisation and low gas fees offered. In the future, Cosmos and Substrate virtual machines could be implemented on Avalanche.
Whilst it's still early for all 3 projects (and the entire blockchain space as a whole), there are currently more projects confirmed to be building on Cosmos and Polkadot, mostly due to their longer time in development. Whilst Cosmos has fewer projects than Polkadot, its zones are already implemented, whereas Polkadot doesn't currently have parachains; IBC, which connects zones and hubs together, is due to launch in Q2 2021. Thus both score ✅✅✅. Avalanche has been in development for a much shorter time period, but is launching with an impressive feature set right from the start: the ability to create subnets, VMs, assets, NFTs, permissioned and permissionless blockchains, cross-chain atomic swaps within a subnet, smart contracts, a bridge to Ethereum, etc. Applications can easily port over from other platforms and use all the existing tooling such as Metamask / Truffle etc., while benefiting from the performance, decentralisation and low gas fees offered. Currently, though, just based on the number of projects in comparison, it scores ✅. https://preview.redd.it/4zpi6s85wpq51.png?width=1000&format=png&auto=webp&s=e91ade1a86a5d50f4976f3b23a46e9287b08e373
Cosmos enables permissioned and permissionless zones which can connect to each other, with the ability to have full control over who validates the blockchain. For permissionless zones, each zone / hub can have its own token, and they are in control of who validates.
With Polkadot, the state transition is performed by a small, randomly assigned group of validators from the relay chain, with the possibility that state is rolled back if an invalid transaction is found on any of the other parachains. This may pose a problem for enterprises that need complete control over who performs validation for regulatory reasons. In addition, due to the limited number of parachain slots available, enterprises would have to acquire and lock up large amounts of a highly volatile asset (DOT), with the possibility that they are outbid in future auctions and find they can no longer have their parachain validated, while parathreads don't provide the guaranteed performance required for the application to function.
Avalanche enables permissioned and permissionless subnets, and complex rulesets can be configured to meet regulatory compliance. For example, a subnet can be created where it's mandatory that all validators are from a certain legal jurisdiction, or hold a specific license and are regulated by the SEC, etc. Subnets are also able to scale to tens of thousands of validators, even potentially millions of nodes, all participating in consensus, so every enterprise can run their own node rather than only a small number. Enterprises don't have to hold large amounts of a highly volatile asset; instead they pay a fee in AVAX for the creation of the subnets and blockchains, which is burnt.
Avalanche provides the customisability to run private permissioned blockchains as well as permissionless ones, where the enterprise is in control of who validates the blockchain, with the ability to use complex rulesets to meet regulatory compliance; thus it scores ✅✅✅. Cosmos is also able to run permissioned and permissionless zones / hubs, so enterprises have full control over who validates a blockchain, and it scores ✅✅. Polkadot requires locking up large amounts of a highly volatile asset, with the possibility of being outbid by competitors, being unable to run the application if guaranteed performance is required, and having to migrate away. The relay chain validates the state transitions and can roll back a parachain should an invalid block be detected on another parachain; thus it scores ✅. https://preview.redd.it/li5jy6u6wpq51.png?width=1000&format=png&auto=webp&s=e2a95f1f88e5efbcf9e23c789ae0f002c8eb73fc
Cosmos will connect Hubs and Zones together through its IBC protocol (due for release in Q1 2020). Connecting to blockchains outside of the Cosmos ecosystem would either require the connected blockchain to fork its code to implement IBC or, more likely, a custom "Peg Zone" will be created specifically to work with the particular blockchain it's trying to bridge to, such as Ethereum. Each Zone and Hub has different trust levels, and connectivity between 2 zones can carry different trust depending on which path it takes (this is discussed more in this article). Finality time is low at 6 seconds, but depending on the number of hops, this can increase significantly.
Polkadot's shared state means each connected parachain shares the same trust assumptions, those of the relay chain validators, and that if one blockchain needs to be reverted, all of them will need to be reverted. Interoperability is enabled between parachains through the Cross-Chain Message Passing (XCMP) protocol, and it is also possible to connect to other systems through bridges, which are specifically designed parachains or parathreads each custom made to interact with another ecosystem such as Ethereum or Bitcoin. Finality time between parachains is around 60 seconds, but longer will be needed (initial figures of 60 minutes in the whitepaper) for connecting to external blockchains, thus limiting the appeal of connecting two external ecosystems together through Polkadot. Polkadot is also limited in the number of parachain slots available, limiting the number of blockchains that can be bridged. Parathreads could be used for lower-performance bridges, but the speed of future blockchains is only going to increase.
A subnet can validate multiple virtual machines / blockchains, and all blockchains within a subnet share the same trust assumptions / validator set, enabling cross-chain interoperability. Interoperability is also possible between any other subnet, with the hope that Avalanche will consist of thousands of subnets. Each subnet may have a different trust level, but as the primary network consists of all validators, it can be used as a source of trust if required. As Avalanche supports many virtual machines, bridges to other ecosystems are created by running the connected virtual machine. There will be an Ethereum bridge using the EVM shortly after mainnet. Finality time is much faster at sub 3 seconds (with most happening under 1 second), with no chance of rolling back, so it is more appealing when connecting to external blockchains.
All 3 systems are able to perform interoperability within their ecosystems, transferring assets as well as data, and use bridges to connect to external blockchains. Cosmos has different trust levels between its zones and hubs, which can create issues depending on the path taken and adds latency. Polkadot provides the same trust assumptions for all connected parachains but has long finality and a limited number of parachain slots available. Avalanche provides the same trust assumptions for all blockchains within a subnet, and different trust levels between subnets; however, as the primary network consists of all validators, it can be used for trust. Avalanche also has a much faster finality time, with no limitation on the number of blockchains / subnets / bridges that can be created. Overall, all three blockchains excel at interoperability within their ecosystems, and each scores ✅✅. https://preview.redd.it/ai0bkbq8wpq51.png?width=1000&format=png&auto=webp&s=3e85ee6a3c4670f388ccea00b0c906c3fb51e415
The ATOM token is the native token for the Cosmos Hub. It is commonly mistaken for the token used throughout the Cosmos ecosystem, whereas it's just used for one of many hubs in Cosmos, each of which has its own token. Currently ATOM has little utility, as IBC isn't released and the Hub has no connections to other zones / hubs. Once IBC is released, zones may prefer to connect to a different hub instead, in which case ATOM would not be used. ATOM isn't a fixed-cap supply token; supply will continuously increase with yearly inflation of around 10%, depending on the % staked. The current market cap for ATOM as of the time of this writing is $1 billion, with a 203 million circulating supply. Rewards can be earnt through staking to offset the dilution caused by inflation. Delegators can also get slashed and lose a portion of their ATOM should the validator misbehave.
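The point about staking rewards offsetting inflation dilution can be made concrete with a back-of-the-envelope sketch. The numbers are illustrative only, and it assumes a flat 10% annual mint paid entirely to stakers, which simplifies Cosmos's actual reward schedule.

```python
# Illustrative only: how staking rewards offset inflation dilution.
# Assumes 10% annual inflation, with all newly minted tokens paid
# to stakers pro rata (a simplification of the real schedule).
def share_after_year(my_tokens, total_supply, staked_fraction,
                     inflation=0.10, i_stake=True):
    minted = total_supply * inflation
    new_supply = total_supply + minted
    if i_stake:
        # Stakers split the newly minted tokens in proportion
        # to their share of the total staked amount.
        my_tokens += minted * my_tokens / (total_supply * staked_fraction)
    return my_tokens / new_supply

# A non-staker's share of supply shrinks; a staker's share grows,
# because non-stakers receive none of the new issuance.
assert share_after_year(100, 10_000, 0.5, i_stake=False) < 100 / 10_000
assert share_after_year(100, 10_000, 0.5, i_stake=True) > 100 / 10_000
```

This is why "10% inflation" mainly dilutes holders who do not stake: stakers reclaim their share (and more, when less than 100% of supply is staked) through rewards.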
Polkadot's native token is DOT, and it's used to secure the Relay Chain. Each parachain needs to acquire sufficient DOT to win an auction for an available parachain lease period of up to 24 months at a time. Parathreads have a fixed fee for registration that would realistically be much lower than the cost of acquiring a parachain slot, and they compete with other parathreads in a per-block auction to have their transactions included in the next relay chain block. DOT isn't a fixed-cap supply token; supply will continuously increase with yearly inflation of around 10%, depending on the % staked. The current market cap for DOT as of the time of this writing is $4.4 billion, with an 852 million circulating supply. Delegators can also get slashed and lose their DOT (potentially 100% of their DOT for serious attacks) should the validator misbehave.
AVAX is the native token for the primary network in Avalanche. Every validator of any subnet also has to validate the primary network and stake a minimum of 2,000 AVAX. Unlike other consensus methods, there is no limit to the number of validators, so this can cater for tens of thousands, even potentially millions, of validators. As every validator validates the primary network, it can be a source of trust for interoperability between subnets as well as for connecting to other ecosystems, thus increasing the amount of transaction fees paid in AVAX. There is no slashing in Avalanche, so there is no risk of losing your AVAX when selecting a validator; instead, the rewards earnt for staking can be slashed should the validator misbehave. Because Avalanche doesn't have direct slashing, it is technically possible for someone to both stake AND deliver tokens for something like a flash loan, under the invariant that all tokens that are staked are returned, thus being able to make profit with staked tokens outside of staking itself. There will also be a separate subnet for Athereum, which is a 'spoon', or friendly fork, of Ethereum, benefiting from the Avalanche consensus protocol and the applications in the Ethereum ecosystem. Its native token ATH will be airdropped to ETH holders as well as potentially AVAX holders. This can be done for other blockchains as well. Transaction fees on the primary network for all 3 of its blockchains, as well as subscription fees for creating a subnet and blockchain, are paid in AVAX and are burnt, creating deflationary pressure. AVAX has a fixed capped supply of 720 million tokens, creating scarcity, rather than an unlimited supply which continuously increases at a compounded rate each year like others. Initially there will be 360 million tokens minted at mainnet, with vesting periods between 1 and 10 years and tokens gradually unlocking each quarter. The circulating supply is 24.5 million AVAX, with tokens gradually released each quarter.
The current market cap of AVAX is around $100 million.
Where is the rebuttal to LTB ep 344? Premise: Bitcoin Cash difficulty reset was designed to be broken. BCH was a quickly put together scam to enrich short term miners by a broken reward mechanism. Big block advocates have been suckered into propping up the price of broken/scam software /r/btc
I have ADHD. I was diagnosed at age 12. What happened is I got to middle school, and my life fell apart. It came on like a typhoon. Things seemed alright as I started, but I still remember that October when my family went to sixth-grade check-in. My twin sister went first. The meeting lasted about four minutes. She and my parents left with smiles all around and talk of getting In N Out on the way home. Then it was my turn. Every teacher I had stood in a circle. They seemed...different. One by one, they went around and told me that I was shit. Some were nicer than others, but everyone had the same message to convey: he doesn't complete his homework all the way; he distracts others trying to learn; he's unable to follow along in class; we're not sure if he can keep up. I then heard my grades: C-, D+, C+, A in PE, C, and an F in Social Studies. I don't remember being ashamed or embarrassed or anything. I remember being confused. I had gone to school every day and tried hard and thought I was doing what the teacher asked. Nope. Guess I wasn't. Nobody had much advice for me. They just wanted me to know that I sucked. And that my parents should understand so. I don't know if my parents freaked out or punished me or what. But they weren't happy. The last to go was my social studies teacher, Sven. He asked me if I knew how to read. I politely nodded my head. But he wasn't sure. He talked about all the symptoms he had seen from me. To counter, I pulled a grad-level book on the Cold War off a shelf and read a page aloud while trying not to cry. People were even more confused. Some estimate that a child with ADHD will receive 20,000 more negative comments before the age of 12 than a non-ADHD child will. I can't speak to that exactly, but I can say that this was not the only time I've had a room full of people upset with me for reasons I never saw coming. It doesn't get much easier. Sven caught up to us as we walked to the car.
He was cagey with his reasoning, but he told us that there might be something up with my brain. He recommended I get tested by a psychiatrist and see what she had to say. I've since come to my own conclusions about where he got such an idea.

The testing was fun. I've always liked tests. Didn't mention it, but they also thought I couldn't read in 2nd grade. Lol. That one went away after I took a standardized exam and scored in the 99th percentile of the nation in reading. I thought standardized tests were fun, you see. I moved a bunch of colored balls into colored holes and tried to remember what color things were after 10 minutes and everything else you might expect. I didn't know what I was even doing, but I felt I could hang.

Three weeks later, I got my results. The only part I remember is that my psychiatrist noted that in her entire career, she had never met someone who scored higher on specific tasks and yet lower on others. My chart looked like OJ Simpson's polygraph.

I could keep going, and in another article, I will. But this is how I got diagnosed. And the key to all of it was Sven. Everything makes perfect sense after the fact, but only when you realize that a single teacher served as the link that completes the narrative. I do not know where I would be today without him.

I got lucky that this story takes place in 2003, and at a private school with teachers who genuinely cared about me. For reasons a lawyer in the comments needs to help me understand better, public school teachers seem loath to alert students of disabilities of any kind. This includes ADHD but also things like autism, dyslexia, and mood disorders. Things that seem apparent to me in a way that makes it seem impossible that no other teacher in the past 13 years has also picked up on them. That means many students go through primary schooling while having no idea they have a problem at all. When I mention to a student they might have ADHD, they are first confused, but then some memories come back.
The first is that someone, usually a sports or music coach, had once told them the same thing. The other is that they remember a lot of teachers saying weird stuff they didn't understand at the time. Stuff like, "You're so talented. I just wish you could be better focused. Have you talked to anyone about why you could be having trouble?" To me, those sound like hints from a teacher who has been told by her bosses not to put the school at risk.

I am not a teacher. I'm a private consultant and can pretty much say whatever I want. I am also not a doctor - people would die - but I am a concerned adult who has taken courses in spotting learning disabilities. I'm also someone who will do absolutely anything to make sure his students have the best chance for success now and in the future. I'm also someone who asked both my ADHD-psychiatrist (hi!) and ADHD-therapist (hi!!!!!) if I had the right to tell students if I suspected something; they both went, "Ya, dude. Totally." So I try to be Sven. I try to pay attention to what my students do and say and provide feedback that can help them. I'd like to note what that feedback is here to make sure people don't miss it, because my pieces go on for way too long.

If you are a high school student who suspects he or she has ADHD, your best course of action is to talk with your parents and look into being tested by a professional psychiatrist who specializes in the topic. These tests are expensive, and mental health insurance in America sucks balls. But this is the fastest, most straightforward route to getting the help you need.

Option two is to try and work with/through your public high school to get them to pay for it. This site has some good info. My guess is that this method will suck. Public schools don't have a lot of funding and will not want to spend it on you. That's not your problem. You will almost certainly need your parents to back you up on this one and sit through a lot of boring meetings.
I assume a lot of people will tell you a lot of reasons why they can't help you. Your response every time should be some version of, "Sure. But I need help with this. And I'm not going to stop until I get the support I need. So what do I do from here?" Then you blankly stare at them and refuse to leave until they get you at least to the next step. I'm not sure how well this will work. If you do attempt or have attempted this method, please DM me or email me with your experience. I want to know if this is even worth my students' time.

If you cannot afford traditional testing or do not feel your parents would support such testing, your best option is to wait until the day you turn 18 and then register for a telehealth company specializing in ADHD. The one I use and recommend is HelloAhead.com. They're neat. They do not take traditional insurance, but their rates are much lower than most doctors'. They are cheap enough that I feel an average 18-year-old who wants help could find a way to afford it on his or her own. The downside with these sites is the waiting times can be long. Took me like five months. Other such sites are popping up, and while I can't vouch for them, they all seem to offer a similar service.

Those paragraphs are what I want every student here to know. I'm much more comfortable having a trained doctor tell you what the deal is than I am trying to do it myself. But I have to see something if I want to be Sven. The question then is, how do I see it? For spotting ADHD, it's shockingly simple. And I'll get to the real reason at the end. But for now, here is what I see when I see a student with ADHD.

The best way I can describe their lives is "endless chaos"

The chaos isn't always bad! Rarely it's fun chaos, but often it's just chaos chaos. This chaos exists in both physical and mental forms.

Physical: Their shit is such a mess. Everything. Most of the work we do is digital, so I see the Google Doc version of their mind. Folders make no sense.
Things are labeled inaccurately or not at all. Schools get combined, or separated, or forgotten altogether. It is not a single type of error, but instead a collection of small mistakes and poor decisions that make the work impossible to corral. I have some kids that are messy or lazy, but this is different. It's like if the original folder system I built for them was an amoeba in a petri dish. Leave that dish out for a weekend and come back. The patterns will be remarkably similar to the organizational gore that they then try to utilize.

Mental: There's always a story. "I was late because my car has a flat tire, and the guy was late, so I had to take an Uber." "I didn't know my music essays were due a month early because the form only mentioned there being a recital." "My friend is mad at me, but it's only because she didn't tell me we were the first group presenting, so I spent more time preparing our project." These stories make sense at first. But after a few weeks, they start to pile up. Then I become the one hearing a story about why they didn't do what I wanted, and I stop being so forgiving.

ADHD is a neurological disorder. Not a mental illness. It's closer to diabetes than it is to bipolar disorder. "ADHD" is a fairly garbage name for the condition because A) it has a stigma, and B) it isn't even accurate. Both attention deficit and hyperactivity are symptoms of ADHD, but they are not the problem itself. It would be like calling clinical depression "low energy and excessive guilt disorder." ADHD is actually an issue involving improper dopamine regulation in the brain combined with under-activity of the brain's executive function component. The executive function center is the part of your brain that is in charge of making sure all the other parts of your brain play nice and communicate. When the executive function center breaks down...those other parts don't.
The result is a failure to plan or coordinate + a need for impulsive stimulation, thus resulting in endless chaos. This is what I'll ask you if you DM me, btw. Is your life endless chaos? Sometimes do you like the chaos? Sometimes do you get bored and create the chaos yourself just to see what might happen? But when that chaos stops being so fun, can you make it stop?

They're very, very intelligent

You've probably heard about the "gifted ADHD genius" thing before. I don't think it exists. My theory has always been that the "gifted ADHD child" is a victim of survivorship bias. The research states that ADHD has either no or a negative correlation with intelligence. There is also a startling overlap between ADHD and incarceration. This means that students who still manage to succeed despite their disorder tend to have advantages that keep them in the game. Namely that they're smart as hell. The other saving grace is that they come from secure support networks that prevent them from unraveling completely. I've heard from such students that their mom or dad works tirelessly to keep their life in order and to make sure they're getting things done. I do not think it is a coincidence that when ADHD students leave for college, things often fall apart.

The fact that there are ADHD kids that others know and still like makes some think ADHD isn't so bad or comes with natural cognitive advantages. Those same people do not become friends with the ADHD dumb kids who would disprove those perceptions. Do you remember that kid in elementary school who was his own worst enemy? He never had friends, and everyone was kind of afraid to even talk with him? He was kind of a bully but mostly just awful? He invited you to his house one time, but your mom wouldn't let you go? That is my best guess of what a dumb kid with ADHD is like. It sounds cold writing it, but you know which kid I'm talking about right now. Where do you think that kid is today?
I end up with the smart ones—the ones with parents who care. And God damn are these kids smart. They're brilliant, and funny, and likable, and charming. They have something different about them that makes them undeniable. And it's not just me. I worry I play them up too much in my mind, but then I chat with a teacher or coach of theirs. It's always the same thing: Oh, she's brilliant. She can be so frustrating sometimes, tho.

They can be so frustrating sometimes, tho

The word is frustrating. Not bad. Not nasty. Not unlikeable. Frustrating. I have some students I just don't like that much (no, not you). What tends to be the common theme with them is that they don't have much interest in my help and display a work ethic to match. On the other end of the spectrum are the world beaters (totally you). These kids kick ass and not only follow my advice but often take that advice to the next level in ways that awe and inspire me.

And then there are the kids I think have ADHD. They don't do stuff all the time. They don't finish an essay, or they forget to spell check like I asked, or they write about something that has nothing to do with the outline we built the week before. That's not necessarily the frustrating part. You kids are 17; you make mistakes. Early on, I try to spot these mistakes and point them out. Even the students who don't like me seem to get my point after enough prodding, and the problem goes away. With these kids, the problem does not go away. Or if it does, another problem pops right back up to replace it. It makes me feel like there's nothing I can do. It would be easier if the student was just a brat. Then I could either become a brat myself or mentally check out because "hey man, your future."

I need a name for kids I suspect have ADHD..."MaybeHD"? Ya. That's super funny. Say it out loud and try not to laugh. But these MaybeHD kids do like me. And they do want to get into school. And they do feel bad when I get upset with them.
I end up in long, drawn-out conversations with them about why this is important and why they need to make specific work a priority to get into the schools they want to go to. Then they nod meekly and head home. Then they come back next week, and it's the same story. Frustrating.

They are randomly awesome at the weirdest things

I love weird talents. Things that no one offers up immediately, but then you're chatting, and it comes up naturally. "Oh ya, I love animals! I raise baby pigs in my backyard!" "You do?" "Ya!" At some point, the MaybeHD kid read something or watched a Youtube video that he or she liked. Then they wanted to try it. Six months later, they're making 4k a month selling custom bathrobes on Etsy. There's rarely any logic. "Do you like baths? Or making clothing?" "Not really. I just thought it looked fun, so I bought a sewing kit and started making things."

There is a noted link between ADHD and entrepreneurship. I see it with my MaybeHD students. They have an insatiable drive and passion for following up on curiosities that other students don't possess. Passion is the wrong word. They have obsessions with mastering concepts in a way that feels beyond their control. The obsession itself drives them to be great. The literature on the subject is cloudy. But there exists a term in ADHD circles called "hyperfocus." If you know what "flow" is, it's kind of like that. Only more intense and less controllable. I often see the remnants of past hyperfocuses in their stories. They used to run that pig farm. They used to sell bathrobes. They used to be really into getting good grades at school. But then one day, just as quickly as they picked the skill up, they dropped it. They can seldom tell me why.

Their priorities are completely out of whack

The downside of hyperfocus is that it can be so all-encompassing that other priorities fall by the wayside. One of my favorite students ever is named Elleway.
We chatted in our first meeting, and I was instantly intrigued by her background. She said she had designed and prototyped a unit that would automatically roll under parked electric cars for hands-free charging. I hear a lot of impressive stuff in my job, and a lot of it ends up being not that impressive. But then Elleway showed me the prototype video she made back when she was a high school freshman, and it blew my mind. https://youtu.be/Y5Ap2uMbWL4 Can you do that? I sure as hell can't. She wasn't even an engineer. She calmly explained that she had partnered with several older male engineers who had helped turn her idea into reality. Then she had done all the promotional and marketing work herself. Then she got second out of 300 students at a young entrepreneur contest held at Columbia University. Shortly after, a tech CEO came up to her and asked if she would like to work with him to file a patent for the invention. She agreed and is now a trademark holder.

That was all in our first 10 minutes. She then went on to share the half dozen corporations she had worked for. And the three businesses she started. And the graphic design work she made for her website. She told me how she was a nationally ranked fencer until she lost interest. She was now merely a nationally ranked golfer. Then I saw she had a 2.9 GPA and thus zero shot at getting into NYU like she hoped.

I did not initially think Elleway had ADHD. I thought she was a pathological liar. It seemed impossible to me that this same girl who had already taken a grip on the world was then unable to keep up her grades in math. That just isn't how any -any- of my other ultra high-achieving students behave. Then Elleway showed me pictures of her casually hanging out with Andrew Yang. And then her LinkedIn, with a lot of people who do not accept your request unless they want to. I had to figure out what the hell led to all this. Elleway's patent and ambition to work on it had taken up all her time.
She was so singularly focused on doing what she cared about that the world behind her didn't seem to exist. She was hyperfocused on a goal, but once she reached it, she woke up to a reality that punished her for ignoring everything else. That's the longing writer's version of the story. The more popular one is that she didn't give a shit about school, was warned repeatedly about the consequences, and ignored them. She got what she deserved. That's the version the rest of the world had for her. It goes back to frustrating.

I've gotten kids into NYU that don't show a fifth the potential that Elleway did. Those kids went to all the camps their parents paid for and entered competitions with a tech doorbell or something lame, and they're just fine. But MaybeHD students are often world-beaters in ways that make them seem so special. They talk endlessly not just about what they're into but how they figured it all out and why it is all so important to them. I believe them, and I want to fight for them. So I give them as much assistance as I possibly can. But then they don't do the increasingly easy tasks I ask them to complete. Then they suffer the consequences. Elleway didn't get into NYU. She didn't get in much of anywhere. It eats me up inside, and I feel like I failed her. I don't know how many other people in my position would feel the same way. That's why I have to be Sven.

This is getting long, and I'm getting depressed. Here's the TL;DR of what I see when I see a student with ADHD ... Me. I see me. And it can hurt really bad knowing what a condition like ADHD does to a young person's life.

My life is endless chaos. I've been out of food for nine days. My house looks like Badger from Breaking Bad bought a loft in Palo Alto. I am still writing this at 3:25 AM when I have to be up for work at nine. My cat has started doing this thing where she sleeps in her food bowl when it gets empty. It's equal parts adorable and humiliating.

I'm smart as shit. I know it.
I made up half-ideas. That article is absolute fire. I got published on Cracked.com five times in 2011 when that meant something. I went to Tulane on a half-ride merit scholarship, used to win creative writing contests, and have done a bunch of other writery stuff that made people stand up and go, "Woah." But I only made it to college because my mom carried me there, kicking and screaming. She packaged my life together, and I held on for the ride. Then I got to school and made it two months before she got an email alerting her that Tulane was planning to revoke the remaining $70,000 of my $80,000 scholarship due to my grades. I barely scraped by and survived. But the shame and frustration in her voice when she read me that letter over the phone haunts me to this day.

I analyze handwriting. And I turned a Reddit account into a successful business in four months. And I collect college T-shirts from schools my students go to. And I own Bitcoin I bought in 2011 for $4.50 each. And I'm teaching myself piano with a video game. And I'm exercising with a video game. And I'm ranked 42nd in Northern California at Super Smash Bros Ultimate. And I've tried the nachos at over 100 taquerias in the Bay Area. And I own a really cute cat.

But I've spent 15* hours this week writing this instead of a sequel to that Costco piece. I have one coming where I edit my Common App essay from 2009. It's a great idea and a great article. One that will drive significantly more business to my site than this piece will. Hell, I predict this piece is likely to lose me business because I come off like a mess in it. But it's what I want to write, so I feel like I have no choice.

*The 15 hours is a guess. I have no idea how long it takes me to write and edit these things. I start typing and X hours later look up and realize how hungry I am and how much I need to pee. The writing controls me.

I see myself in my MaybeHD students.
I see their unfettered curiosity and flair for taking as much good from the world as possible. I see their infectious enthusiasm and ability to quickly forgive others because they know too well how it feels to want forgiveness themselves. Yet I also see their inattention to detail, their weak excuses, and their general confusion that makes me realize they couldn't fix some problems if their lives depended on it. I see their sadness and shame when those mistakes pile up. I see when the chaos stops being fun, and they want out, but they don't know how. I don't know what I, as their consultant, can do. But as Sven, I can recommend they go talk to someone else...

Hey, so, I was considering hiring you and all...but you seem kind of bad. Why should I trust you?

Because a couple of years ago, I got back on my medication and turned my life around. You aren't reading this if I don't reach out for help and trust a trained psychiatrist to guide me. There are no groups of friends in Delaware or Connecticut comparing their half-ideas lists. There sure as shit isn't a CollegeWithMattie.com. I still have ADHD. But one of the greatest things about ADHD is that it is -without rival- the most treatable form of mental illness or dysfunction known to man. It is not curable, but there are endless medical and non-medical options available for those willing to reach out and get the help they need. My story is that it was only by getting re-medicated that I then could learn and use coping mechanisms that allow me to achieve the type of life I've always wanted.

Christ, 4,400 words. You know, I'm also submitting this for a class I'm in. That's why all the backlinks are to actual sources instead of links herding you into my website. Hi Amy! That's one more thing. ADHD people are hyper-efficient...Kind of.

Alright. If you're still here reading this, you might be suspecting some things about yourself. My DMs are open if you want to chat, but again, I am not a doctor.
I will say that right now, as you prepare to head to college, is a really good time to get this all figured out. College is a giant reset button on your life. Figure these problems out now so that by the time you head off for your next chapter, you will have given yourself the best possible chance to succeed. Endless chaos.

Here is the bold part again: If you are a student in high school who suspects he or she has ADHD, your best course of action is to talk with your parents and look into being tested by a professional psychiatrist who specializes in the topic. These tests are expensive, and mental health insurance in America (still) sucks balls. But this is the fastest, most straightforward route to getting the help you need.

Option two is to try and work with/through your public high school to get them to pay for it. This site has some good info. My guess is that this method will kind of suck. Public schools don't have a lot of funding and will not want to spend it on you. That's not your problem. You will almost certainly need your parents to back you up on this one and sit through a lot of boring meetings. I assume a lot of people will tell you a lot of reasons why they can't help you. Your response every time should be some version of, "Sure. But I need help with this. And I'm not going to stop until I get the support I need. So what do I do from here?" Then you blankly stare at them and refuse to leave until they get you at least to the next step. This will suck, and I'm not sure how well it will work. If you do attempt or have attempted this method, please DM me or email me with your experience. I want to know if this is even worth my students' time.

If you cannot afford traditional testing, or if you do not feel your parents would support such testing, your best option is to wait until the day you turn 18 and then register for a telehealth company that specializes in ADHD. The one I use and recommend is HelloAhead.com. They're neat.
They do not take traditional insurance, but their rates are much lower than most doctors'. They are cheap enough that I feel an average 18-year-old who wants help could find a way to afford it on his or her own. The downside with these sites is the waiting times can be really long. Took me like five months. Other such sites are popping up, and while I can't vouch for them, they all seem to offer a similar service. Update: The lines aren't that long anymore!

Monday was Elleway's 18th birthday. She sent me a screengrab of her upcoming Ahead appointment in early September. She told me she spent the entire day crying because all her friends were going off to great schools and that she was stuck at home. I've told Elleway that I plan to help her reapply to NYU this year. I doubt I will ever want to see another student succeed as much as I will with her.
d down, k up, everybody's a game theorist, titcoin, build wiki on Cardano, (e-)voting, competitive marketing analysis, Goguen product update, Alexa likes Charles, David hates all, Adam in and bros in arms with the scientific counterparts of the major cryptocurrency groups, the latest AMA for all!
Decreasing d parameter

Just signed the latest change management document; I was the last in the chain, so I signed it today for changing the d parameter from 0.52 to 0.5. That means we are just about to cross the threshold here in a little bit for d to fall below 0.5, which means more than half of all the blocks will be made by the community and not the OBFT nodes. That's a major milestone, and at this current rate of velocity it looks like d will decrement to zero around March, so lots to do, lots to talk about. Product update, two days from now, we'll go ahead and talk about that, but it crossed my desk today and I was really happy and excited about it. It seemed like yesterday that d was equal to one and people were complaining that we delayed it by an epoch, and now we're almost at 50 percent. For those of you who want parameter-level changes, k-level changes, they are coming, and there's an enormous internal conversation about it, and we've written up a PowerPoint presentation and a philosophy document about why things were designed the way that they're designed.

Increasing k parameter, upcoming security video, and everybody's a game theorist

My chief scientist has put an enormous amount of time into this. Aggelos is very passionate about this particular topic, and what I'm going to do is similar to the security video that I did, where I did an hour-and-a-half discussion about best practices for security. I'm going to actually do a screencasted video where I talk about this philosophy document, and I'm going to read the entire document with annotations with you guys and kind of talk through it. It might end up being quite a long video. It could be several hours long, but I think it's really important to talk around the design philosophy of this. It's kind of funny: everybody, when they see a cryptographic paper or math paper, they tend to just say, okay, you guys figure that out.
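As a rough illustration of the mechanic described above: the decentralization parameter d is the fraction of blocks reserved for the federated OBFT nodes, so d below 0.5 means the community stake pools make the majority. The sketch below is mine, not IOHK code; the per-epoch block count of 21,600 is an assumption (432,000 slots per epoch times the 0.05 active-slot coefficient).

```python
# Illustrative sketch (not IOHK code): how the decentralization parameter d
# splits expected block production between the federated OBFT nodes and
# community stake pools. Roughly a fraction d of blocks goes to the
# federated nodes; the rest goes to the community.

def expected_block_split(total_blocks: int, d: float) -> tuple[int, int]:
    """Return (federated, community) expected block counts for a given d."""
    if not 0.0 <= d <= 1.0:
        raise ValueError("d must lie in [0, 1]")
    federated = round(total_blocks * d)
    return federated, total_blocks - federated

# At d = 0.52 the federated nodes still make a slim majority of blocks;
# once d reaches 0.5 the split is even, and below 0.5 the community leads.
print(expected_block_split(21600, 0.52))  # -> (11232, 10368)
print(expected_block_split(21600, 0.50))  # -> (10800, 10800)
print(expected_block_split(21600, 0.0))   # -> (0, 21600), the d = 0 endgame
```

At d = 0 the federated nodes produce nothing, which is the "d will decrement to zero around March" milestone the update refers to.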
No one's an expert in cryptography or math, and you don't really get strong opinions about it, but with game theory, despite the fact that the topic is as complex and in some cases more complex, you tend to get a lot of opinions, and everybody's a game theorist. So, there was an enormous amount of thought that went into the design of the system, the parameters of the system, everything from the reward functions to other things, and it's very important that we explain that thought process in as detailed a way as possible. At least the philosophy behind it; then I feel that the community is in a really good position to start working on the change management. It is my position that I'd love to see k largely increased. I do think that the software needs some improvements to get there, especially partial delegation, delegation portfolios, and some enhancements to the operation of staking.

E-voting

I'd love to see the existence of hybrid wallets where you have a cold part and a hot part, and we've had a lot of conversations about that, and we will present some of the progress in that matter at the product updates. If not this October, certainly in November. A lot of commercialization going on, a lot of things flowing around, and, you know, commercial teams working hard. As I mentioned, we have a lot of deals in the pipeline. The Wyoming event was half political, half sales. We were really looking into e-voting, and we had very productive conversations along those lines. It is my goal that Cardano e-voting software is used in political primaries, and my hope is for it eventually to be used in municipal and state and eventually federal elections, and then in national elections for countries like Ethiopia, Mongolia, and other places. Now there is a long, long road to get there and many little victories that have to come first, but this event in Wyoming was kind of the opener into that conversation. There were seven independent parties at the independent national convention, and we had a chance to talk to the leadership of many of them. We will also engage in conversation with the libertarian party leadership, and at the very least we could talk about e-voting and also blockchain-based voting for primaries; that would be a great start, and we'll also look into the state of Wyoming for that as well. We'll, you know, tell you guys about that in time. We've already gotten a lot of inquiries about e-voting software. We tend to get them along with the (Atala) Prism inquiries. It's actually quite easy to start conversations, but there are a lot of security properties that are very important, like end-to-end verifiability, hybrid ballots where you have both a digital and a paper ballot, delegation mechanics, as well as privacy mechanics that are interesting on a case-by-case basis.

Goguen, voting, future fund3, competitive marketing analysis of Ouroboros vs. EOS, Tezos, Algorand, ETH2 and Polkadot, new creative director

We'll keep chipping away at that. A lot of Goguen stuff to talk about, but I'm going to reserve all of that for two days from now for the product update. We're right in the middle; Goguen metadata was the very first part of it. We already have some commercialization platform as a result of metadata, more to come, and then obviously lots of smart contract stuff to come. This update and the November update are going to be very Goguen focused, and also a lot of alternatives as well. We're still on schedule for an HFC event in, I think, November or December. I can't remember, but that's going to be carrying a lot of things related to multisig and token locking. There are some ledger rule changes, so it has to be an HFC event, and that opens up a lot of the windows for Goguen foundations as well as voting on chain, so fund3 will benefit very heavily from that.
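For readers wondering why increasing k matters: in Cardano's reward scheme, k is the target number of pools, and a pool's rewards stop growing once its stake passes roughly 1/k of the total (the saturation point), which nudges delegators toward smaller pools. The following is a minimal sketch of that cap under assumed, hypothetical stake figures; it is not the actual reward formula, which also involves pledge and other factors.

```python
# Illustrative sketch (not the real Ouroboros reward function) of the k
# parameter's saturation effect: stake above total/k earns a pool nothing
# extra, so raising k spreads delegation across more independent pools.

def saturated_stake(pool_stake: float, total_stake: float, k: int) -> float:
    """Stake that actually counts toward rewards, capped at total_stake / k."""
    saturation_point = total_stake / k
    return min(pool_stake, saturation_point)

total = 30_000_000_000  # hypothetical total stake in the system
pool = 150_000_000      # hypothetical pool holding 0.5% of all stake

# With k = 150 the cap is 200M, so this pool is below saturation...
print(saturated_stake(pool, total, k=150))  # -> 150000000.0
# ...but with k = 500 the cap drops to 60M: everything above it is wasted,
# and delegators have an incentive to move their stake elsewhere.
print(saturated_stake(pool, total, k=500))  # -> 60000000.0
```

This is why the update pairs a larger k with wallet features like partial delegation and delegation portfolios: splitting stake across several pools is how large delegators stay under the lower saturation cap.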
We're right in the guts of Daedalus right now building the voting center, the identity center, QR-code work. All this stuff, it's a lot of stuff, you know; the cell phone app was released last week. Kind of an early beta; it'll go through a lot of rapid iterations every few weeks. We'll update it; Google Play is a great foundation to launch things on because it's so easy to push updates to people automatically, so you can rapidly iterate and be very agile in that framework. And, you know, we've already had 3,500 people involved heavily in the innovation management platform IdeaScale, and we've got numerous bids from everyone, from John Buck and the sociocracy movement to others. A lot of people want to help us improve that, and we're going to see steady and systematic growth there.

We're still chipping away at product marketing. Liza (Horowitz) is doing a good job; I meet with her two to three times a week, and right now it's Ouroboros, Ouroboros, Ouroboros... We're doing competitive analysis of Ouroboros versus EOS, Tezos, Algorand, ETH2 and Polkadot. We think that's a good set. We think we have a really good way of explaining it. David (David Likes Crypto, now at IOHK) has already made some great content. We're going to release that soon alongside some other content, and we'll keep chipping away at that. We also just hired a creative director for IO Global. His name's Adam, an incredibly experienced creative director; he's worked for Mercedes-Benz and dozens of other companies. He does very good work, and he's been doing this for well over 20 years, so the very first set of things he's going to do is work with commercial and marketing on product marketing, in addition to building great content, where the hope is to make that content as pretty as possible. We have Rod heavily involved in that as well to talk about distribution channels and see if we can amplify the distribution message and really get a lot of stuff done. Last thing to mention, oh yeah, iOS for Catalyst.
We're working on that. We submitted it to the apple store, the iOS store, but it takes a little longer to get approval for that than it does with google play. That's been submitted, and it's there whenever apple approves it or not. It takes a little longer for cryptocurrency stuff.

Wiki shizzle and battle for crypto, make crypto articles on wiki great again, Alexa knows Charles, Everpedia meets Charles podcast, holy-grail land of Cardano, wiki on Cardano, titcoin Wikipedia...

We kind of rattled the cage a little bit. Through an intermediary we got in contact with Jimmy Wales. Larry Sanger, the other co-founder, also reached out to me, and the everpedia guys reached out to me. Here's where we stand: we have an article, it has solidified, and it's currently labeled as unreliable and you should not believe the things that are said in it, which is David Gerard's work if you look at the edits. We will work with the community and try to get that article to a fair and balanced representation of Cardano, especially after the product marketing comes through. Once we clearly explain the product, I think the Cardano article can be massively strengthened. I've told Rod to work with some specialized people to try to get that done, but we are also going to work very hard at a systematic improvement campaign for all of the scientific articles related to blockchain technology in the cryptocurrency space. They're just terrible. If you go to the proof of work article, the proof of stake article or all these things, they're just terrible. They're not well written, they're out of date and they don't reflect an adequate sampling of the science.
I did talk to my chief scientist Aggelos, and what we're gonna do is reach out to the scientific counterparts at most of the major cryptocurrency groups that are doing research and see if they want to work with us on an industry-wide effort to systematically improve the scientific articles in our industry, so that there is a fair and balanced representation of what the current state of the art is, the criticisms, the trade-offs, as well as the reference space. Obviously we'll do quite well in that respect because we've done the science; we're the inheritors of it. But it's a shame, because when people search proof of stake on google, usually the wikipedia results are highly biased. We care about wikipedia because google cares about wikipedia and amazon cares about wikipedia. If you ask Alexa who Charles Hoskinson is, the reason why Alexa knows is because it's reading directly from the wikipedia page. If I didn't have a wikipedia page, Alexa wouldn't know that. So if somebody says, Alexa, what is Cardano, it's going to read directly from the wikipedia page. We can either pretend that reality doesn't exist, or we can accept it, and we as a community, working with partners in the broader cryptocurrency community, can universally improve the quality of cryptocurrency pages. There's been a pattern of commercial censorship on wikipedia for cryptocurrencies in general since bitcoin itself. In fact I think the bitcoin article was actually taken down once back in, it might have been, 2010 or 2009, but basically wikipedia has not been a friend of cryptocurrencies. That's why everpedia exists, and actually their founders reached out to me and I talked to them over twitter through PMs and we agreed to do a podcast. I'm going to do a streamyard stream with these guys and they'll come on, talk all about everpedia and what they do and how they are, and we'll kind of go through the challenges that they've encountered.
How their platform works and so forth. And obviously, if they ever want to leave that terrible ecosystem EOS and come to the holy-grail land of Cardano, we'd be there to help them out. At the very least they can tell the world how amazing their product is and also the challenges they're having to overcome. We've also been in great contact with Larry Sanger. He's going to do an internal seminar at some point with us and talk about some protocols he's been developing since he left wikipedia, specifically to decentralize knowledge management and have a truly decentralized encyclopedia. I'm really looking forward to that, and I hope that presentation gives us some inspiration as an ecosystem of things we can do. That's a great piece of infrastructure regardless, and we'll learn a lot more about it as we talk to a lot of people in the ecosystem. If we can't get people to move on over, it would be really good to see, through ideascale and the innovation management platform, people utilize the dc fund to build their own variant of wikipedia on Cardano. In the coming months there will certainly be funding available. If you guys are so passionate about this particular problem that you want to go solve it, then I'd be happy to play Elon Musk with the hyperloop and write a white paper on a protocol design and really give it a good first start, and then you guys can go and try to commercialize that technology as Cardano native assets and Plutus smart contracts, in addition to other pieces of technology that have to be brought in to make it practical. Right now we're just in the 'let's talk to everybody' phase: we'll talk to the everpedia guys, we're going to talk to Larry, and we're going to see whoever else is in this game, and of course we have to accept the incumbency as it is.
So, we're working with the wikipedia side to improve the quality of not only our article but all of the articles on the scientific side of things, so that there's a fair and accurate representation of information. One of the reasons why I'm so concerned about this is that I am very worried that Cardano projects will get commercially censored like we were commercially censored. So yes, we do have a page, but it took five years to get there, and we're a multi-billion dollar project with hundreds of thousands of people. If you guys are doing cutting-edge, novel, interesting stuff, I don't want your experience to be the same as ours, where you have to wait five years for your project to get a page even after governments have adopted it. That's absurd; no one should be censored, ever. This is very much a fight for the entire ecosystem, the entire community, not just Cardano but all cryptocurrencies: bitcoin, ethereum and Cardano have all faced commercial censorship and article deletions during their tenure, so I don't want you guys to go through that. I'm hoping we can improve that situation, but you know you don't put all your eggs in one basket, and frankly the time has come for wikipedia to be fully decentralized and liberated from a centralized organization and the massively variable quality in the editor base. If legends of valor has a page but Cardano didn't have one until recently, if titcoin, a pornography coin from 2015 that's deprecated and no one uses, has a page but Cardano couldn't get one, there's something seriously wrong with the quality control mechanism, and we need to improve that. So it'll get done.
Wholeheartedly willing to get downvoted, but this RMT obsession has to stop.
This sub hasn't got a clue, I swear. Huge sweeping changes to the game mechanics are a terrible way to combat RMT. It's basically an admission that your anti-cheat doesn't work. Most MMOs suffer in some way from an RMT problem: WoW, Runescape, even Destiny 2 has RMT issues if you just look. Thing is, the anticheat in those games actually works worth a damn, so the entire playerbase doesn't have to suffer from endless tinkering with in-game systems. Before you hit me with 'it's a hardcore game, deal with it, it's supposed to be grindy', just stop. Just don't bother. I've heard it time and time again, and it's bullshit. You know it's bullshit just as well as I do. The changes BSG have been making recently to nerf all forms of progression only make the game 'more hardcore' for people who work full time and don't have the same amount of *time* as streamers who dedicate their entire life to this game. That's not 'hardcore'. The game's difficulty mechanically is 'hardcore' and always has been, and I love it. These changes, though, in my eyes, are just time-wasting for the sake thereof. Since when does the amount of time one has to invest in a game define how fucking hardcore it is? Would you describe WoW as more hardcore than Tarkov because of how long you have to play to progress? Or perhaps beating all three Witcher games back to back is 'hardcore' because it took a long time. Are ARMA or DCS inherently less hardcore than Tarkov because an operation can be completed in an afternoon? No, judging how hardcore a game is by the amount of time one has to invest in it is a joke. *No game* should give enormous *mechanical advantages* to those with more time on their hands. There's already an inherent skill advantage that comes from that amount of practice; designing the mechanics to also reward only those with that much time is a kick in the teeth to all the people who love this game but can't invest that level of time.
And yeah, you can go ahead and say 'ummm actualllly it's a beta, so they can do what they like, stop whining', and yep. Yes, they can. You're correct. However, comma, that doesn't mean I have to pretend to like it. Yes, I did buy EoD and no, I don't regret it because of all the fun I've had til now. But suggesting people who don't like the current direction the game is going in aren't allowed to voice their opinion because the game's in beta is fucking ludicrous. What do you think the purpose of a playable beta is? Nikita is more than welcome to ignore all the people who don't like these new changes, but what gives people on this sub the right to tell me that I'm not entitled to an opinion on the product I've chosen to financially support? It's such a toxic, capital-G Gamer attitude to suggest that 'Tarkov is OUR game because we're willing to dump several full days a week into grinding for our Bitcoin farms. You should just go and play something else, this clearly isn't a game for you. Go play Call of Duty.' I shouldn't even have to express how utterly reductive and childish that is. Grow up. I'm getting HUGE red flags with the way this game is currently going, because it's all too similar to a game I used to love, The Culling. That game blew up on launch and a bunch of high profile streamers suggested changes to the game, and the devs went ahead and implemented all of them without so much as *thinking* about how they'd affect the average player. Look at where that game is now. Servers shut down, because the average player simply stopped having fun. I'm not saying BSG is even close to that bad, but this endless tinkering with mechanics for the nebulous, vague purpose of 'RMT' has to stop, or I don't know if the 'little guys' are gonna stick around much longer. EDIT: I AM AWARE THAT RMT != CHEATING. But cheating is what makes RMT viable. RMTers need to keep items in supply, and to do that, they cheat. It's much more profitable.
Ergo, if you stamp out cheaters, the RMT problem becomes significantly diminished. EDIT 2: u/ArxMessor makes a great point that Tarkov is an MMO and therefore should have some kind of grind. I agree. However, most MMOs use systems like weekly bounties etc. to ensure even players with only maybe 10 hours a week to invest in the game can still keep up and compete. Tarkov currently rewards time investment *exponentially*, which removes all possibility of catching up. EDIT 3: Yep, my DMs right now are very much confirming the things I said above about a certain subset of this community. Thanks, Gamers. EDIT 4: I get it, Destiny anti-cheat is ass. I made a mistake there, since I don't play Trials of Osiris. However, do you see Bungie making the win requirement for Trials 50 wins instead of 9 or whatever just to slow down the hackers? Of course not, because it hurts normal players more. Edit 5: My first gold! Thanks kind stranger.
A theory of why Ethereum is perhaps better "sound money" than Bitcoin.
The idea of Bitcoin's supremacy as "sound money" is very frequently thrown around by the biggest talking heads in the crypto world. I know I will get a lot of hate for suggesting that this theory is not only flawed, but straight up wrong. As unintuitive as it may sound to Bitcoin maximalists (no offense intended), I believe Ethereum is on the path to becoming the global leading asset and model for sound money... give me a chance to explain why.
The idea that nothing can change Bitcoin's issuance schedule is a myth. There is no divine power controlling the supply of Bitcoin. Contrary to what is commonly asserted, Bitcoin's issuance is not ultimately guaranteed by what is currently implemented. The real driver is consensus: the schedule stays fixed only as long as the majority of network participants agree that what is currently defined should not be changed. There is an underlying assumption that the consensus would never want to change Bitcoin's issuance. On the surface this makes for a nice "sound money" narrative, but it is a false premise, and sticking to it could ultimately be detrimental. It presents a long term sustainability issue (the hope that somehow Bitcoin's base layer will scale enough to maintain security entirely through fees). It also completely dismisses the possibility that an unforeseen event could create pressure to change the issuance. If Bitcoin managed to create a consensus mechanism that did not rely on mining, it is very likely there would be consensus to reduce issuance. On the other hand, if some potentially catastrophic event were to create incentives to increase the issuance, it would only make sense for the network to do so.
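To be concrete about what "what is currently defined" means: Bitcoin's schedule is just a halving rule in the code, and the famous 21 million cap is an emergent consequence of it, not a number written anywhere. A minimal sketch of that arithmetic (illustrative Python, not Bitcoin Core's actual code):

```python
# Bitcoin's issuance schedule: the block subsidy starts at 50 BTC
# and halves every 210,000 blocks until it rounds down to zero satoshis.
SATOSHIS_PER_BTC = 100_000_000
HALVING_INTERVAL = 210_000

def block_subsidy(height: int) -> int:
    """Subsidy in satoshis for a block at the given height."""
    halvings = height // HALVING_INTERVAL
    if halvings >= 64:  # guard mirroring the shift-width limit in C
        return 0
    return (50 * SATOSHIS_PER_BTC) >> halvings

def total_supply() -> int:
    """Sum the subsidy over every block that will ever pay one."""
    supply, height = 0, 0
    while (subsidy := block_subsidy(height)) > 0:
        supply += subsidy * HALVING_INTERVAL
        height += HALVING_INTERVAL
    return supply

# The cap emerges from the schedule: just under 21,000,000 BTC.
print(total_supply() / SATOSHIS_PER_BTC)
```

Nothing but the rule itself (and the consensus to keep running it) enforces the cap; a client shipping a different `block_subsidy` and accepted by the majority would simply have a different cap.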
Issuance flexibility is not fundamentally bad. Ethereum's approach of adjusting issuance according to the circumstances of the moment has resulted in a faster rate of issuance reduction than what was originally defined in the protocol. The rate of issuance will continue to decrease as new developments allow it to happen without compromising network security. There is a very high probability that Ethereum will achieve a lower issuance rate than Bitcoin in the next two years, and it could possibly achieve zero issuance in the next five years. This would be the result of a successful implementation of PoS, sharding and EIP-1559.
The root of all evil is Proof of Work. PoW is by far the primary cost of operating the Bitcoin network. It is the primary determinant of how much issuance is needed as a financial incentive to keep miners doing their thing. The very mechanism that secures the network's decentralization is unfortunately quite wasteful: the degree of decentralization is a direct result of how many random mathematical operations are being done by miners.
There is a better way. Some people will take offense at the use of the word wasteful, claiming that it is not wasteful because those mindless calculations are what is actually securing the network. However, the wasteful aspect becomes clear if there is a different way to achieve equal or superior decentralization without the need to crunch difficult computational problems. This just so happens to be embodied in Ethereum's design of Proof of Stake. It will drastically reduce the cost of securing the network, while providing at least 2-3% annual returns for the ownership of Ether. When Ethereum's issuance becomes lower than its staking rewards, it will effectively have achieved the same effect as having zero (or possibly negative) issuance.
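The claim that staking rewards can exceed issuance sounds paradoxical until you separate three quantities: gross issuance, fees burned, and the fraction of supply staked. A back-of-the-envelope sketch (all rates below are hypothetical placeholders, not protocol values):

```python
# Illustrative only: does a staker's *share* of total supply grow?
# That depends on gross issuance, burned fees, and fraction staked.
# Every number in the example call is a made-up placeholder.

def staker_real_yield(gross_issuance_rate: float,
                      burn_rate: float,
                      fraction_staked: float) -> float:
    """Annual growth of a staker's share of total supply.

    gross_issuance_rate: new coins minted per year / total supply
    burn_rate: fees burned per year / total supply (EIP-1559-style)
    fraction_staked: share of total supply that is staked
    """
    # All new issuance goes to stakers, spread over the staked fraction.
    nominal_yield = gross_issuance_rate / fraction_staked
    # Net supply growth can be negative if burning outpaces minting.
    net_supply_growth = gross_issuance_rate - burn_rate
    return (1 + nominal_yield) / (1 + net_supply_growth) - 1

# Hypothetical: 0.5% gross issuance, 0.6% of supply burned per year,
# 20% of supply staked. Stakers earn ~2.5% nominally while total
# supply shrinks ~0.1%, so their share grows roughly 2.6% a year.
print(staker_real_yield(0.005, 0.006, 0.20))
```

This is the sense in which low (or negative) net issuance and a 2-3% staking return can coexist: the return is paid out of issuance concentrated on stakers, while burning keeps the total supply flat or falling.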
The value proposition of Ethereum 2.0 is unmatched. There is just absolutely no asset in the world that offers 2-3% self-denominated annual returns and also happens to be rapidly appreciating. When Wall Street's greed sees this, it will create the mother of all bubbles.
Don't dismiss the flippening. On February 01 2018 Ethereum reached 70% of Bitcoin's market cap (it was even closer if you account for the amount of lost bitcoins). That happened before DeFi, before proof of stake was within reach, before multiple effective layer 2 solutions were a thing, before wrapped Bitcoins and before the first signs of mass adoption were on the horizon (like integration with Reddit, VISA and the potential to compete with SWIFT). Utility is a huge factor in driving prices; let's not forget how Silk Road played a key role in propelling Bitcoin's value. Yes, Ethereum crashed hard after the peak in 2018, but perhaps it is simply manifesting a higher volatility pattern that is reminiscent of Bitcoin's early years. Bitcoin's first 5 years were characterized by aggressive price swings, so why should it be different for Ethereum (considering it is about 5 years younger than Bitcoin)? If the volatility pattern holds in this bull market, we will see a flippening.
So... do I think Ethereum will flip? Yes I do, but I still hold Bitcoin. No one has a crystal ball, and nothing is certain. Perhaps Ethereum will crash and burn, perhaps Bitcoin will become the next Yahoo, and perhaps they will both thrive in this new exciting crypto world.
Hashing It Out #60 – Balance Research – Arthur Gervais & Dominik Harz. In this episode Corey and Collin bring on the academics Arthur Gervais and Dominik Harz from Imperial College London. They discuss a new protocol that attempts to reduce the required financial deposits for various mechanisms in a way that does not reduce security.

The mechanism used for the version 2, 3, and 4 upgrades is commonly called IsSuperMajority() after the function added to Bitcoin Core to manage those soft forking changes. See BIP34 for a full description of this method. As of this writing, a newer method called version bits is being designed to manage future soft forking changes, although it's not known whether version 4 will be the last...

Satoshi did thoroughly consider the design of the Bitcoin mining mechanism. He was confronted with two options: one allocating Bitcoins to all users according to the number of nodes, namely one-IP-address-one-vote; the other allocating Bitcoins to miners according to computing power, namely one-CPU-one-vote. He chose the latter considering the security of the Bitcoin network.

Mechanism design has been an inspiration for designing economic incentives for cryptocurrencies like Bitcoin and Ethereum, and has led to the development of a whole new field called cryptoeconomics. Mechanism design is a field of economics with a wide range of applications including blockchain, Internet auctions, and public policy. Social choice and mechanism design usually run into the problem that individuals tend to hide or distort their private information to protect their own interests.
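The IsSuperMajority() rule described above is a simple threshold count over a sliding window of recent block versions. A sketch of the check (illustrative Python, not Bitcoin Core's actual C++; BIP34 used 750 of 1,000 blocks to enforce the new rules and 950 of 1,000 to reject old-version blocks):

```python
from typing import Sequence

def is_super_majority(min_version: int,
                      recent_versions: Sequence[int],
                      required: int,
                      window: int = 1000) -> bool:
    """True if at least `required` of the last `window` block
    versions are >= min_version, in the style of Bitcoin Core's
    IsSuperMajority() used for the BIP34/65/66 soft forks."""
    recent = recent_versions[-window:]
    return sum(v >= min_version for v in recent) >= required

# Hypothetical window: 760 v2 blocks and 240 v1 blocks.
versions = [2] * 760 + [1] * 240
print(is_super_majority(2, versions, 750))  # True: enforce new rules
print(is_super_majority(2, versions, 950))  # False: v1 still accepted
```

Version bits (BIP9) generalized this by letting several soft forks signal in parallel through individual bits of the version field, rather than a single monotonically increasing version number.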
A Mechanism Designer's View of Cryptocurrencies - Matt Weinberg - CES Summit '19
Part of the "Economics & Security of Mining" session at the Cryptoeconomic Systems Summit '19. The talk looks at the design of Bitcoin and other cryptocurrencies from a mechanism design perspective, covering cryptography, game theory, and network architecture.