Bitcoin Transaction ...

why do we still have tiny blocks

SV is successfully mining 64MB blocks. How come ABC can't even do 32MB? Are we small blockers after all?
submitted by dank_memestorm to btc [link] [comments]

Bitcoin Verde v1.2.1 - 20191115 HF Support

Bitcoin Verde v1.2.1

https://github.com/SoftwareVerde/bitcoin-verde
What's new in v1.2.1:
If you're running a Bitcoin Verde node be sure to upgrade to the latest version before the 15th. And feel free to let me know you're running a node--it would be great to hear. Currently the resources required to run a node are quite large, so we don't expect there to be many until a subsequent release. Additionally, there have been some database changes since the last major release, but migration will be handled during the node's first restart. If you encounter any problems just send me a message, or find me in Telegram: https://t.me/bitcoinverde .
The notable upgrade in this patch is support for the 20191115 HF, which includes Schnorr signature support for multisig transactions.
Bitcoin Verde put a strong emphasis on SLP support this release, which includes RPC commands for checking the validity of SLP transactions. SLP transactions may be checked via the explorer's API or directly via RPC. You can refer to the scripts directory, or check the documentation at https://bitcoinverde.org/documentation/#rpc for more details.
What's coming in future releases:
We have added two custom network calls for SPV wallets to query the validity of SLP transactions. These calls are obviously trusted calls, and the network level is not encrypted, so they may be subject to man-in-the-middle attacks without special considerations. We're refining this feature and plan to release it in the next version of Bitcoin Verde.
We are also investing a lot of effort in improving the initial-block-download times of Bitcoin Verde. This feature involves a rather large database restructure, but has shown to improve performance of the IBD and synced validation significantly. Currently Bitcoin Verde stores, validates, and indexes transactions at a rate of ~2k-12k tx/s, depending on hardware, configuration, and mempool synchronization with the network. We expect to double this in a near-future release.
We've also been collaborating with Xavier Kral from bitcoin.com to reduce the disk footprint of Bitcoin Verde, both by hardening "trim" mode and by changing the way some data is stored within the database without losing existing functionality.
As always, we're proud to be a part of the Bitcoin Cash community, and love collaborating with brilliant people like Mark Lundeberg, Josh Ellithorpe, Amaury Séchet, and many, many others.
submitted by FerriestaPatronum to btc [link] [comments]

[For Hire] Crypto-Currency / Blockchain Developer -- Experienced and Professional


Just finished developing out a larger wallet platform, so thought I would throw this out there. Briefly: 38-year-old Canadian, 19 years of fast-paced, professional experience in online software, with 8 of those years in bitcoin / crypto-currency. I'm the type who knows the protocol inside out: key generation and management, the cryptography to generate addresses and sign txs, BIP32, multi-sig, the message protocol to communicate directly with nodes on the P2P network, etc.
Well versed with online security, and excellent at non-custodial wallet systems where every user gets their own BIP32 wallet and funds are user segregated. Have done dozens of these operations, everything including traditional exchange, P2P exchange a la Localbitcoins, multi-vendor marketplace, merchant payment gateway, secure web wallets, coin mixers, lottery site, block explorer, jobs board, eCommerce, and the list goes on. Can do anything in your imagination.
You can view details of a base system at: https://envrin.com/secure_wallet
Demo admin panel at: https://demo.envrin.com/admin/
For a better idea of who I am and my skillset, I created Apex at: https://apexpl.io/
And if you want an idea of how in-depth I go into the protocol, I wrote an article at: https://apexpl.github.io/bitcoin_cash_sv_abc_transaction_signatures.html
Speaking of that, I can now offer full support for the Bitcoin Cash / SV / ABC protocols, plus any Bitcoin derivatives. Don't have ETH just yet, but that can be easily added in if your operation requires it.
If you have the desire to jump into the crypto-currency / blockchain game, and need a comprehensive, secure and scalable solution for your operation, I'm your guy. More than happy to provide you with consultation, and answer any questions you have. Rate is $50/hour, and you may contact me any time via PM here, or e-mail at [email protected].
Thanks, and looking forward to hearing from you and getting you set up with a quality crypto-currency operation.
submitted by Envrin to forhire [link] [comments]

Bitcoin Verde: A New Consensus Full-Node Implementation for BCH

For the past year I have been working on a full-node in Java, completely from scratch. Today, after so much research, work, communication and testing, I am very happy to release the first beta version of Bitcoin Verde--on the genesis block's 10th birthday, no less!
Bitcoin Verde is a ground-up implementation of the Bitcoin (Cash) (BCH) protocol. This project is a full node, blockchain explorer, and library.
In the past, the lack of a diversified development team and node implementation has caused bugs to become a part of the protocol. BCH currently has roughly three common full-node implementations (Bitcoin ABC, Bitcoin XT, Bitcoin Unlimited). However, these implementations are forked versions of Bitcoin Core, which means they may share the same (undiscovered) bugs. With a diverse network of nodes, bugs in the implementation of the protocol will result in incompatible blocks between the nodes, causing a temporary fork. This situation is healthy for the network in the long term, as the temporary forks will resolve over time, with the intended ruleset becoming the consensus.
Bitcoin Verde approaches many of the design decisions made by the reference client very differently--most prominently, Bitcoin Verde stores the entire blockchain in its database, not just the UTXOs. Because of this, reorgs are handled very differently and it's even possible to validate multiple forks at the same time. In fact, you can view http://bitcoinverde.org/blockchain/ to view some of the forks our instance has encountered. The node considers the chain matching its consensus rules and having the most PoW to be its "head" chain.
I've spent a lot of time talking with the Bitcoin XT group to attempt to stay in-step with their consensus rules as much as possible, and it is my goal to ensure we are diversifying the implementation of the network, NOT separating it. Because of that, please be sure to treat this release as a beta. Currently Bitcoin Verde does not have a mining-pool module finished, but once confidence has been raised about the consistency of the rulesets, this is a feature we intend to implement, and Bitcoin Verde will become a mining full node.
Every component is multithreaded, including networking, validating, mempool acceptance, etc. It is my hope that during the next network stress-test, Bitcoin Verde can help to gather statistics on forks, transactions per second, and block/tx propagation time.
Bitcoin Verde has its drawbacks: it's a resource hog. Since the whole blockchain is indexed, the disk footprint is about 600GB. Initial-block-download memory usage is configurable, but is about 4 GB (1.5 GB for the database + 1/2 GB for the node + 1 GB for the tx-bloom filter + 1 GB for the UTXO cache). Another drawback is that Bitcoin Verde "does more stuff"--it is essentially a block explorer, and because of that, the initial block download takes about 2-4 days to index all of the chain and its addresses.
Bitcoin Verde has been tested for weeks on Linux (Debian) and OS X. The node has not been tested well on Windows and it may in fact not even sync fully (only a Windows issue, currently). If you're a Windows user and are tech-savvy, feel free to give it a go and report any issues.
I wanted to give my thanks to the Bitcoin XT team for being so welcoming of me. You're a great group of guys, and thanks for the conversations.
Explorer: http://bitcoinverde.org
Source: https://github.com/softwareverde/bitcoin-verde
Documentation: http://bitcoinverde.org/documentation/
Forks: http://bitcoinverde.org/blockchain/
Node Status: http://bitcoinverde.org/status/
submitted by FerriestaPatronum to btc [link] [comments]

Bitcoin Cash Block Sizes Average Less Than 100 KB, Defeating The Point Of Its Creation

https://cryptoiq.co/bitcoin-cash-block-sizes-average-less-than-100-kb-defeating-the-point-of-its-creation/
Bitcoin Cash (BCH) forked from the Bitcoin (BTC) blockchain in August 2017, amid a heated block size debate. At the time the Bitcoin network was undergoing congestion due to increased transaction frequency and transaction fees began to exceed $1, going as high as $3 in June 2017.
The Bitcoin Cash community thought that increasing Bitcoin’s block size limit was the best method to increase scalability. Initially, when Bitcoin Cash was created, it had a block size limit of 8 MB, and this was later increased to 32 MB. But Bitcoin Cash (BCH) has a very low rate of adoption, and block sizes currently average less than 100 KB, making the block size increase above Bitcoin’s (BTC) 1 MB totally pointless, defeating the purpose of Bitcoin Cash (BCH).
The block explorer shows how Bitcoin Cash (BCH) cannot even reach 1 MB block sizes, let alone 32 MB. Block sizes of less than 10 KB are common, and there is an occasional block less than 1 KB. Blocks in excess of 100 KB are rare, and there are no blocks today anywhere near 1 MB. Therefore, Bitcoin Cash (BCH) could have a block size of 1 MB and function perfectly. The long term block size chart shows that block sizes have averaged well below 100 KB throughout December 2018.
There are a few instances in 2018 when Bitcoin Cash (BCH) exceeded 1 MB block sizes. In early September average block size briefly hit 1-3 MB, but this was from a “stress test” to prove transaction fees do not increase from increased transactions on the network.
In November, Bitcoin Cash (BCH) split into Bitcoin Cash ABC (now named Bitcoin Cash) and Bitcoin SV. The war between these Bitcoin Cash forks caused spam attacks that increased block sizes to 1-2 MB on average.
On Jan. 15 the average Bitcoin Cash (BCH) block size approached 5 MB, coinciding with the price of Bitcoin Cash (BCH) crashing from $2,700 to $1,500. This is perhaps the one case where Bitcoin Cash’s network legitimately had block sizes over 1 MB, but it was due to people dumping their Bitcoin Cash (BCH) as fast as possible in a panic selling situation.
In summary, since Bitcoin Cash (BCH) has relatively low network activity when compared to Bitcoin (BTC), it seems that there was no point in creating Bitcoin Cash (BCH), since its block sizes are almost always below 100 KB.
Bitcoin (BTC) seems to have resolved its transaction fee problems with Segregated Witness (SegWit), which increases the effective block size to about 1.2 MB on average. This is done by redefining the block limit in terms of weight units instead of raw bytes: the witness data (signature data) is separated from the base transaction serialization, and each byte of witness data counts as ¼ of the weight of a non-witness byte, so blocks larger than 1 MB can fit under the limit.
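To put that discount into numbers, here is a minimal sketch (not from the article; it uses the standard consensus values of a 4,000,000 weight-unit limit with non-witness bytes weighted 4:1 against witness bytes):

# Sketch of SegWit block weight accounting.
MAX_BLOCK_WEIGHT = 4_000_000  # weight units, the post-SegWit consensus limit

def block_weight(non_witness_bytes: int, witness_bytes: int) -> int:
    # Non-witness bytes count 4 weight units each, witness bytes 1 each,
    # so witness (signature) data is effectively discounted to 1/4.
    return 4 * non_witness_bytes + witness_bytes

# Example: 800,000 non-witness bytes plus 700,000 witness bytes weigh
# 3,900,000 units and still fit, even though the raw serialized block
# (1.5 MB) is larger than the old 1 MB cap.
assert block_weight(800_000, 700_000) <= MAX_BLOCK_WEIGHT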
Also, the Bitcoin Lightning Network is maturing and can handle as much transaction volume as Bitcoin needs without increasing on-chain transactions or block size. In November 2018, the Lightning Network rapidly grew in capacity due to increasing Bitcoin transaction volume and proved that it is a solution which can completely mitigate rises in Bitcoin transaction fees. The fact that Bitcoin (BTC) has become scalable to increased transaction frequency makes the creation of Bitcoin Cash (BCH) even more pointless.
submitted by turtlecane to CryptoCurrency [link] [comments]

An interesting issue with the first ever Bitcoin transaction

Hello, everyone.
I'm working on a personal project which involves indexing certain data from the Bitcoin/Bitcoin Cash blockchain into a database, and once I started my indexing script from the block 0, I was met with an error saying that this transaction (the first Bitcoin transaction) does not exist. My first reaction was to go check the Bitcoin.com's block explorer, but it also reported the transaction as non-existent (no such mempool or blockchain transaction). I then inspected my own Bitcoin ABC node and found that it was also returning the same error (I do have transaction indexing turned on, and I have reindexed the blockchain). However, other explorers such as Blockchair (as in my example) correctly find the transaction. All other transactions can be found without issues by my node.
This is not a major issue in my project, as I can simply index that one transaction "by hand", but I was wondering why this might be happening, and if anybody came across a similar problem.
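For what it's worth, the usual explanation is that Satoshi's original code never added the genesis coinbase's output to the transaction/UTXO database, so getrawtransaction refuses to return it even with txindex enabled, and some explorers simply special-case it. A minimal workaround sketch (assuming a local node with JSON-RPC enabled; the URL and credentials are placeholders) is to decode block 0 instead, since getblock with verbosity 2 reconstructs the transactions straight from the block data:

# Sketch: fetch the genesis coinbase by decoding block 0 instead of calling
# getrawtransaction, which rejects it.
import requests

RPC_URL = "http://user:pass@127.0.0.1:8332/"   # placeholder credentials/port
GENESIS_TXID = "4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b"

def rpc(method, *params):
    resp = requests.post(RPC_URL, json={"jsonrpc": "1.0", "id": 0,
                                        "method": method, "params": list(params)})
    resp.raise_for_status()
    return resp.json()["result"]

genesis_hash = rpc("getblockhash", 0)
block = rpc("getblock", genesis_hash, 2)   # verbosity 2 decodes the block's txs
coinbase = block["tx"][0]
assert coinbase["txid"] == GENESIS_TXID    # index it "by hand" from here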
submitted by Aldin_SXR to btc [link] [comments]

A technical dive into CTOR

Over the last several days I've been looking in detail at numerous aspects of the now-infamous CTOR change that is scheduled for the November hard fork. I'd like to offer a concrete overview of what exactly CTOR is, what the code looks like, how well it works, what the algorithms are, and the outlook. If anyone finds the change to be mysterious or unclear, then hopefully this will help them out.
This document is placed into public domain.

What is TTOR? CTOR? AOR?

Currently in Bitcoin Cash, there are many possible ways to order the transactions in a block. There is only a partial ordering requirement in that transactions must be ordered causally -- if a transaction spends an output from another transaction in the same block, then the spending transaction must come after. This is known as the Topological Transaction Ordering Rule (TTOR) since it can be mathematically described as a topological ordering of the graph of transactions held inside the block.
The November 2018 hard fork will change to a Canonical Transaction Ordering Rule (CTOR). This CTOR will enforce that for a given set of transactions in a block, there is only one valid order (hence "canonical"). Any future blocks that deviate from this ordering rule will be deemed invalid. The specific canonical ordering that has been chosen for November is a dictionary ordering (lexicographic) based on the transaction ID. You can see an example of it in this testnet block (explorer here, provided this testnet is still alive). Note that the txids are all in dictionary order, except for the coinbase transaction which always comes first. The precise canonical ordering rule can be described as "coinbase first, then ascending lexicographic order based on txid".
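As a small illustration (a sketch, not code from any node implementation), checking a block against that rule is just a pairwise comparison of the displayed hex txids after the coinbase, and a block builder only has to sort before computing the merkle root:

# Sketch: verify and produce "coinbase first, then ascending lexicographic txid".
def is_ctor_ordered(txids):
    # txids as the usual display-order hex strings; txids[0] is the coinbase,
    # which is exempt from the ordering rule but must come first.
    rest = txids[1:]
    return all(a < b for a, b in zip(rest, rest[1:]))

def ctor_sort(non_coinbase_txids):
    # The canonical (lexicographic) order a miner would emit.
    return sorted(non_coinbase_txids)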
(If you want to have your bitcoin node join this testnet, see the instructions here. Hopefully we can get a public faucet and ElectrumX server running soon, so light wallet users can play with the testnet too.)
Another ordering rule that has been suggested is removing restrictions on ordering (except that the coinbase must come first) -- this is known as the Any Ordering Rule (AOR). There are no serious proposals to switch to AOR but it will be important in the discussions below.

Two changes: removing the old order (TTOR->AOR), and installing a new order (AOR->CTOR)

The proposed November upgrade combines two changes in one step:
  1. Removing the old causal rule: now, a spending transaction can come before the output that it spends from the same block.
  2. Adding a new rule that fixes the ordering of all transactions in the block.
In this document I am going to distinguish these two steps (TTOR->AOR, AOR->CTOR) as I believe it helps to clarify the way different components are affected by the change.

Code changes in Bitcoin ABC

In Bitcoin ABC, several thousand lines of code have been changed from version 0.17.1 to version 0.18.1 (the current version at time of writing). The differences can be viewed here, on github. The vast majority of these changes appear to be various refactorings, code style changes, and so on. The relevant bits of code that deal with the November hard fork activation can be found by searching for "MagneticAnomaly"; the variable magneticanomalyactivationtime sets the time at which the new rules will activate.
The main changes relating to transaction ordering are found in the file src/validation.cpp:
There are other changes as well:

Algorithms

Serial block processing (one thread)

One of the most important steps in validating blocks is updating the unspent transaction outputs (UTXO) set. It is during this process that double spends are detected and invalidated.
The standard way to process a block in bitcoin is to loop through transactions one-by-one, removing spent outputs and then adding new outputs. This straightforward approach requires exact topological order and fails otherwise (therefore it automatically verifies TTOR). In pseudocode:
for tx in transactions:
    remove_utxos(tx.inputs)
    add_utxos(tx.outputs)
Note that modern implementations do not apply these changes immediately, rather, the adds/removes are saved into a commit. After validation is completed, the commit is applied to the UTXO database in batch.
By breaking this into two loops, it becomes possible to update the UTXO set in a way that doesn't care about ordering. This is known as the outputs-then-inputs (OTI) algorithm.
for tx in transactions:
    add_utxos(tx.outputs)
for tx in transactions:
    remove_utxos(tx.inputs)
Benchmarks by Jonathan Toomim with Bitcoin ABC, and by myself with ElectrumX, show that the performance penalty of OTI's two loops (as opposed to the one loop version) is negligible.

Concurrent block processing

The UTXO updates actually form a significant fraction of the time needed for block processing. It would be helpful if they could be parallelized.
There are some concurrent algorithms for block validation that require quasi-topological order to function correctly. For example, multiple workers could process the standard loop shown above, starting at the beginning. A worker temporarily pauses if the utxo does not exist yet, since it's possible that another worker will soon create that utxo.
There are issues with such order-sensitive concurrent block processing algorithms:
In contrast, the OTI algorithm's loops are fully parallelizable: the worker threads can operate in an independent manner and touch transactions in any order. Until recently, OTI was thought to be unable to verify TTOR, so one reason to remove TTOR was that it would allow changing to parallel OTI. It turns out however that this is not true: Jonathan Toomim has shown that TTOR enforcement is easily added by recording new UTXOs' indices within-block, and then comparing indices during the remove phase.
In any case, it appears to me that any concurrent validation algorithm would need such additional code to verify that TTOR is being exactly respected; thus for concurrent validation TTOR is a hindrance at best.
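A rough sketch of that index-recording trick (my own illustration of the idea, not Toomim's code; the transaction and UTXO-set shapes are simplified placeholders):

# Sketch: outputs-then-inputs (OTI) UTXO update that still verifies TTOR by
# remembering the within-block index at which each output was created.
def apply_block_oti(transactions, utxo_set):
    created_at = {}                              # outpoint -> index of creating tx
    for i, tx in enumerate(transactions):        # pass 1: add outputs
        for outpoint, coin in tx.outputs.items():
            utxo_set[outpoint] = coin
            created_at[outpoint] = i
    for i, tx in enumerate(transactions):        # pass 2: remove spent outputs
        for outpoint in tx.inputs:
            if outpoint not in utxo_set:
                raise ValueError("missing or double-spent input")
            if created_at.get(outpoint, -1) >= i:
                raise ValueError("TTOR violation: spend precedes creation")
            del utxo_set[outpoint]
    # Each pass can touch transactions in any order, so with appropriately
    # synchronized maps both loops can be handed to parallel workers.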

Advanced parallel techniques

With Bitcoin Cash blocks scaling to large sizes, it may one day be necessary to scale onto advanced server architectures involving sharding. A lot of discussion has been made over this possibility, but really it is too early to start optimizing for sharding. I would note that at this scale, TTOR is not going to be helpful, and CTOR may or may not lead to performance optimizations.

Block propagation (graphene)

A major bottleneck that exists in Bitcoin Cash today is block propagation. During the stress test, it was noticed that the largest blocks (~20 MB) could take minutes to propagate across the network. This is a serious concern since propagation delays mean increased orphan rates, which in turn complicate the economics and incentives of mining.
'Graphene' is a set reconciliation technique using bloom filters and invertible bloom lookup tables. It drastically reduces the amount of bandwidth required to communicate a block. Unfortunately, the core graphene mechanism does not provide ordering information, and so if many orderings are possible then ordering information needs to be appended. For large blocks, this ordering information makes up the majority of the graphene message.
To reduce the size of ordering information while keeping TTOR, miners could optionally decide to order their transactions in a canonical ordering (Gavin's order, for example) and the graphene protocol could be hard coded so that this kind of special order is transmitted in one byte. This would add a significant technical burden on mining software (to create blocks in such a specific unusual order) as well as graphene (which must detect this order, and be able to reconstruct it). It is not clear to me whether it would be possible to efficiently parallelize sorting algorithms that reconstruct these orderings.
The adoption of CTOR gives an easy solution to all this: there is only one ordering, so no extra ordering information needs to be appended. The ordering is recovered with a comparison sort, which parallelizes better than a topological sort. This should simplify the graphene codebase and it removes the need to start considering supporting various optional ordering encodings.

Reversibility and technical debt

Can the change to CTOR be undone at a later time? Yes and no.
For block validators / block explorers that look over historical blocks, the removal of TTOR will permanently rule out usage of the standard serial processing algorithm. This is not really a problem (aside from the one-time annoyance), since OTI appears to be just as efficient in serial, and it parallelizes well.
For anything that deals with new blocks (like graphene, network protocol, block builders for mining, new block validation), it is not a problem to change the ordering at a later date (to AOR / TTOR or back to CTOR again, or something else). These changes would add no long term technical debt, since they only involve new blocks. For past-block validation it can be retroactively declared that old blocks (older than a few months) have no ordering requirement.

Summary and outlook

Taking a broader view, graphene is not the magic bullet for network propagation. Even with the CTOR-improved graphene, we might not see vastly better performance right away. There is also work needed in the network layer to simply move the messages faster between nodes. In the last stress test, we also saw limitations on mempool performance (tx acceptance and relaying). I hope both of these fronts see optimizations before the next stress test, so that a fresh set of bottlenecks can be revealed.
submitted by markblundeberg to btc [link] [comments]

EOS - Getting Started & Helpful Links

WELCOME TO eos!

Table of Contents

  1. What is EOS?
  2. Why is EOS Different?
  3. Get Started
    1. WHAT IS AN EOS ACCOUNT?
      1. GET FREE EOS ACCOUNTS
      2. WHAT IS REX AND HOW TO USE IT FOR RESOURCES
      3. DECENTRALIZED FINANCE (DEFI) ON EOS
  4. Channels, dApps, Block Explorer and more
    1. Governance and Security
    2. Wallets
    3. DApps
    4. Popular dApps
    5. Block Explorers
      1. REX User Interfaces
  5. Channels
  6. FAQ

What is EOS?

EOS is a community-driven distributed blockchain that allows the development and execution of industrial-scale decentralized applications (dApps). EOS's intention is to become a blockchain dApp platform that can securely and smoothly scale to thousands of transactions per second, all while providing an accessible experience to app developers, entrepreneurs, and users. They aim to provide a complete operating system for decentralized applications by providing services like user authentication, cloud storage, and server hosting.
The EOS.IO open-source code was developed and is currently maintained by Block.One. Block.One is based in the Cayman Islands and is managed by Brendan Blumer (CEO), Daniel Larimer (CTO) and Andrew Bliss (CFO).
Links:
Video:

How is EOS different?

EOS is the first blockchain that focuses on building a dApps platform using the delegated proof-of-stake (dPoS) consensus mechanism. With dPoS, EOS manages to provide a public blockchain with some particular features, such as scalability, flexibility, usability and governance.
Further Reading:

Get Started:

WHAT IS AN EOS ACCOUNT?

From EOS Beginners: Anatomy of an EOS Account
An EOS account is a readable name that is stored on the EOS blockchain and connected to your “keys”. An EOS account is required for performing actions on the EOS platform, such as sending/receiving tokens, voting, and staking.
Each account is linked to a public key, and this public key is in turn linked to a private key. A private key can be used to generate an associated public key, but not vice versa. (A private key and its associated public key make up a key pair)
These keys ensure that only you can access and perform actions with your account. Your public key is visible by everyone using the network. Your private key, however, will never be shown. You must store your private keys in a safe location as they should not be shared with anyone (unless you want your EOS to be stolen)!
TLDR: EOS Accounts are controlled by key pairs and store EOS tokens in the Blockchain. Wallets store key pairs that are used to sign transactions.
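As a tiny illustration of that one-way relationship (a sketch using the third-party Python `ecdsa` package, assuming a reasonably recent version; EOS key pairs are on the secp256k1 curve, though the familiar base58 "EOS..." display encoding of the public key is omitted here):

# Sketch: derive a public key from a private key on secp256k1 (the curve EOS uses).
# Going the other way - recovering the private key from the public key - is
# computationally infeasible.
import ecdsa

private_key = ecdsa.SigningKey.generate(curve=ecdsa.SECP256k1)
public_key = private_key.get_verifying_key()      # private -> public: easy

print(private_key.to_string().hex())              # keep this secret
print(public_key.to_string("compressed").hex())   # safe to share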

GET FREE EOS ACCOUNTS

From EOS Onboarding: Free Accounts
Unlike other chains in the space, EOSIO accounts do not typically charge a transfer fee for sending tokens or providing actions on the blockchain. Where Bitcoin and Ethereum mine blocks and charge a fee, EOS provides feeless transactions to users based on CPU, NET, and RAM resources.
Although traditionally those wanting to create EOS accounts have needed to ‘pay a fee’ to get into the system, in reality this fee is nothing more than a basic stake of CPU and NET resources. In theory this provides free transactions on the network; the number of transactions any user gets in a 24-hour window is determined by the amount of stake, especially CPU, that the account maintains.
This guide provides a brief overview of the account creation process for some of the account types that allow easy, friction-free EOS mainnet onboarding and, in most cases, come with more than enough resources to use the network without having to buy, transfer, and stake or rent resources to keep your account operational.

WHAT IS REX AND HOW TO USE IT FOR RESOURCES

What is REX?
REX (Resource Exchange) is a resource market where EOS token holders can lease out tokens in return for “rent”, and dApps can lease the resources they need at lower cost.
For EOS holders: Earn an income by putting your spare EOS tokens in REX instead of just keeping them in your EOS account.
For EOS dApps: Lease as many resources as you need at a decent price instead of staking EOS for resources at a 1:1 ratio.
Source: TokenPocket
Through REX you can pay a small amount of EOS to receive a much larger amount in CPU or NET for a whole month. Today (August 20, 2020), paying 1 EOS on REX guarantees you 7,500 EOS in CPU for 30 days.
You can easily use REX via Anchor Wallet, importing your EOS Account and with a few simple clicks. Learn how to use REX with Anchor
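Just to put the quoted rate into plain numbers (the rate moves constantly, so treat this purely as an example of the calculation):

# Sketch: back-of-the-envelope REX rental math using the rate quoted above.
eos_paid = 1.0               # EOS spent on a 30-day CPU rental via REX
cpu_stake_received = 7500.0  # EOS-equivalent of CPU stake received at the quoted rate
print(cpu_stake_received / eos_paid)  # ~7500x the CPU you would get by staking 1 EOS yourself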

DECENTRALIZED FINANCE (DEFI) ON EOS

Decentralized Finance (DeFi) is the combination of traditional financial instruments with decentralized blockchain technology. Currently DeFi is the fastest growing sector in blockchain, allowing greater inclusion within the financial system even for those who previously could not participate in the global economy. Indeed, a smartphone is enough to use DeFi products, and the so-called "unbanked" can now participate without any restrictions.
DeFi Projects on EOS
  • VIGOR - VIGOR protocol is a borrow, lend, and save community
  • Defibox - One-stop EOS DeFi Platform
  • Equilibrium (EOSDT) - Equilibrium is an all-in-one Interoperable DeFi hub
  • PredIQt (prediqt.everipedia.org) - PredIQt is a prediction market & oracle protocol
  • PIZZA - PIZZA is an EOS based decentralized stablecoin system and financial platform
  • Chintai - high performance, fee-less community owned token leasing platform

Channels, dApps, Block Explorer and more

Governance and Security:

Wallets:

DApps:

Popular dApps:

  • NewDex - Decentralized Exchange.
  • Prospectors - MMO game with real-time economic strategies
  • Everipedia - Wiki-based online encyclopedia
  • Upland - Property trading game with real-world addresses
  • Crypto Dynasty - Play-to-Earn with Crypto Dynasty

Block explorers:

Guides to vote:

REX User Interface:

Channels:

Official:
Community:
Telegram:
Telegram Non-English General:
Developers:
Testnets:

FAQ:

submitted by eosgo to eos [link] [comments]

Transcript of the community Q&A with Steve Shadders and Daniel Connolly of the Bitcoin SV development team. We talk about the path to big blocks, new opcodes, selfish mining, malleability, and why November will lead to a divergence in consensus rules. (Cont in comments)

We've gone through the painstaking process of transcribing the linked interview with Steve Shadders and Daniel Connolly of the Bitcoin SV team. There is an amazing amount of information in this interview that we feel is important for businesses and miners to hear, so we believe it was important to get this in a written form. To avoid any bias, the transcript is taken almost word for word from the video, with just a few changes made for easier reading. If you see any corrections that need to be made, please let us know.
Each question is in bold, and each question and response is timestamped accordingly. You can follow along with the video here:
https://youtu.be/tPImTXFb_U8

BEGIN TRANSCRIPT:

Connor: 0:02:19.68,0:02:45.10
Alright so thank You Daniel and Steve for joining us. We're joined by Steve Shadders and Daniel Connolly from nChain and also the lead developers of the Satoshi’s Vision client. So Daniel and Steve do you guys just want to introduce yourselves before we kind of get started here - who are you guys and how did you get started?
Steve: 0:02:38.83,0:03:30.61
So I'm Steve Shadders and at nChain I am the director of solutions in engineering and specifically for Bitcoin SV I am the technical director of the project, which means that I'm a bit less hands-on than Daniel but I handle a lot of the liaison with the miners that commissioned the project.
Daniel:
Hi I’m Daniel I’m the lead developer for Bitcoin SV. As the team's grown that means that I do less actual coding myself but more organizing the team and organizing what we’re working on.
Connor 03:23.07,0:04:15.98
Great so we took some questions - we asked on Reddit to have people come and post their questions. We tried to take as many of those as we could and eliminate some of the duplicates, so we're gonna kind of go through each question one by one. We added some questions of our own in and we'll try and get through most of these if we can. So I think we just wanted to start out and ask, you know, Bitcoin Cash is a little bit over a year old now. Bitcoin itself is ten years old but in the past a little over a year now what has the process been like for you guys working with the multiple development teams and, you know, why is it important that the Satoshi’s vision client exists today?
Steve: 0:04:17.66,0:06:03.46
I mean yes well we’ve been in touch with the developer teams for quite some time - I think a bi-weekly meeting of Bitcoin Cash developers across all implementations started around November last year. I myself joined those in January or February of this year and Daniel a few months later. So we communicate with all of those teams and I think, you know, it's not been without its challenges. It's well known that there's a lot of disagreements around it, but something I do look forward to in the near future is a day when the consensus issues themselves are all rather settled, and if we get to that point then there's not going to be much reason for the different developer teams to disagree on stuff. They might disagree on non-consensus related stuff but that's not the end of the world because, you know, Bitcoin Unlimited is free to go and implement whatever they want in the back end of Bitcoin Unlimited and Bitcoin SV is free to do whatever they want in the backend, and if they interoperate on a non-consensus level, great. If they don't, it's not such a big problem - there will obviously be bridges between the two, so, yeah I think going forward the complications of having so many personalities with wildly different ideas are going to get less and less.
Cory: 0:06:00.59,0:06:19.59
I guess moving forward now another question about the testnet - a lot of people on Reddit have been asking what the testing process for Bitcoin SV has been like, and if you guys plan on releasing any of those results from the testing?
Daniel: 0:06:19.59,0:07:55.55
Sure yeah so our release will be concentrated on the stability, right, with the first release of Bitcoin SV and that involved doing a large amount of additional testing particularly not so much at the unit test level but at the more system test so setting up test networks, performing tests, and making sure that the software behaved as we expected, right. Confirming the changes we made, making sure that there aren’t any other side effects. Because of, you know, it was quite a rush to release the first version so we've got our test results documented, but not in a way that we can really release them. We're thinking about doing that but we’re not there yet.
Steve: 0:07:50.25,0:09:50.87
Just to tidy that up - we've spent a lot of our time developing really robust test processes and the reporting is something that we can read on our internal systems easily, but we need to tidy that up to give it out for public release. The priority for us was making sure that the software was safe to use. We've established a test framework that involves a progression of code changes through multiple test environments - I think it's five different test environments before it gets the QA stamp of approval - and as for the question about the testnet, yeah, we've got four of them. We've got Testnet One and Testnet Two. A slightly different numbering scheme to the testnet three that everyone's probably used to – that’s just how we reference them internally. They're [1 and 2] both forks of Testnet Three. [Testnet] One we used for activation testing, so we would test things before and after activation - that one’s set to reset every couple of days. The other one [Testnet Two] was set to post activation so that we can test all of the consensus changes. The third one was a performance test network which I think most people have probably heard us refer to before as Gigablock Testnet. I get my tongue tied every time I try to say that word so I've started calling it the Performance test network and I think we're planning on having two of those: one that we can just do our own stuff with and experiment without having to worry about external unknown factors going on and having other people joining it and doing stuff that we don't know about that affects our ability to baseline performance tests, but the other one (which I think might still be a work in progress so Daniel might be able to answer that one) is one of them where basically everyone will be able to join and they can try and mess stuff up as bad as they want.
Daniel: 0:09:45.02,0:10:20.93
Yeah, so we so we recently shared the details of Testnet One and Two with the with the other BCH developer groups. The Gigablock test network we've shared up with one group so far but yeah we're building it as Steve pointed out to be publicly accessible.
Connor: 0:10:18.88,0:10:44.00
I think that was my next question I saw that you posted on Twitter about the revived Gigablock testnet initiative and so it looked like blocks bigger than 32 megabytes were being mined and propagated there, but maybe the block explorers themselves were coming down - what does that revived Gigablock test initiative look like?
Daniel: 0:10:41.62,0:11:58.34
That's what the Gigablock test network is. So the Gigablock test network was first set up by Bitcoin Unlimited with nChain’s help and they did some great work on that, and we wanted to revive it. So we wanted to bring it back and do some large-scale testing on it. It's a flexible network - at one point we had eight different large nodes spread across the globe, sort of mirroring the old one. Right now we've scaled back because we're not using it at the moment, so there are, I think, three. We have produced some large blocks there and it's helped us a lot in our research into the scaling capabilities of Bitcoin SV, so it's guided the work that the team’s been doing for the last month or two on the improvements that we need for scalability.
Steve: 0:11:56.48,0:13:34.25
I think that's actually a good point to kind of frame where our priorities have been in kind of two separate stages. I think, as Daniel mentioned before, because of the time constraints we kept the change set for the October 15 release as minimal as possible - it was just the consensus changes. We didn't do any work on performance at all and we put all our focus and energy into establishing the QA process and making sure that that change was safe and that was a good process for us to go through. It highlighted what we were missing in our team – we got our recruiters very busy recruiting a Test Manager and more QA people. The second stage after that is performance related work which, as Daniel mentioned, the results of our performance testing fed into what tasks we were gonna start working on for the performance related stuff. Now that work is still in progress - some of the items that we identified the code is done and that's going through the QA process but it’s not quite there yet. That's basically the two-stage process that we've been through so far. We have a roadmap that goes further into the future that outlines more stuff, but primarily it’s been QA first, performance second. The performance enhancements are close and on the horizon but some of that work should be ongoing for quite some time.
Daniel: 0:13:37.49,0:14:35.14
Some of the changes we need for the performance are really quite large and really get down into the base level view of the software. There's kind of two groups of them mainly. One that are internal to the software – to Bitcoin SV itself - improving the way it works inside. And then there's other ones that interface it with the outside world. One of those in particular we're working closely with another group to make a compatible change - it's not consensus changing or anything like that - but having the same interface on multiple different implementations will be very helpful right, so we're working closely with them to make improvements for scalability.
Connor: 0:14:32.60,0:15:26.45
Obviously for Bitcoin SV one of the main things that you guys wanted to do that some of the other developer groups weren't willing to do right now is to increase the maximum default block size to 128 megabytes. I kind of wanted to pick your brains a little bit about this - a lot of the objection to either removing the block size entirely or increasing it on a larger scale is this idea of the infinite block attack, right, and that kind of came through in a lot of the questions. What are your thoughts on the “infinite block attack” and is it something that really exists, is it something that miners themselves should be more proactive on preventing, or I guess what are your thoughts on that attack that everyone says will happen if you uncap the block size?
Steve: 0:15:23.45,0:18:28.56
I'm often quoted on Twitter and Reddit - I've said before the infinite block attack is bullshit. Now, that's a statement that I suppose is easy to take out of context, but I think the 128 MB limit is something there are probably two schools of thought about. There are some people who think that you shouldn't increase the limit to 128 MB until the software can handle it, and there are others who think that it's fine to do it now so that the limit is increased when the software can handle it and you don’t run into the limit when the software improves and can handle it. Obviously we’re from the latter school of thought. As I said before we've got a bunch of performance increases, performance enhancements, in the pipeline. If we wait till May to increase the block size limit to 128 MB then those performance enhancements will go in, but we won't be able to actually demonstrate it on mainnet. As for the infinite block attack itself, I mean there are a number of mitigations that you can put in place. I mean firstly, you know, going down to a bit of the tech detail - when you send a block message or send any peer to peer message there's a header which has the size of the message. If someone says they're sending you a 30MB message and you're receiving it and it gets to 33MB then obviously you know something's wrong so you can drop the connection. If someone sends you a message that's 129 MB and you know the block size limit is 128 you know it’s kind of pointless to download that message. So I mean these are just some of the mitigations that you can put in place. When I say the attack is bullshit, I mean it is bullshit in the sense that it's really quite trivial to prevent it from happening. I think there is a bit of a school of thought in the Bitcoin world that if it's not in the software right now then it kind of doesn't exist. I disagree with that, because there are small changes that can be made to work around problems like this. One other aspect of the infinite block attack, and let’s not call it the infinite block attack, let's just call it the large block attack - it takes a lot of time to validate, but we've gotten around that by having parallel pipelines for blocks to come in, so you've got a block that's coming in and it's got an unknown stuck on it for two hours or whatever downloading and validating it. At some point another block is going to get mined by someone else and as long as those two blocks aren't stuck in a serial pipeline then you know the problem kind of goes away.
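Concretely, the first two mitigations Steve mentions can be sketched like this (my illustration; the constant, the function and the read loop are simplified stand-ins for real P2P message handling):

# Sketch: two cheap checks against the "large block attack" described above.
import socket

MAX_BLOCK_SIZE = 128 * 1000 * 1000   # bytes; the post-upgrade limit discussed above

def read_announced_payload(sock: socket.socket, announced_length: int) -> bytes:
    # Check 1: if the header announces more than the consensus limit allows,
    # there is no point downloading the message at all.
    if announced_length > MAX_BLOCK_SIZE:
        raise ConnectionError("announced payload exceeds the block size limit")
    received = bytearray()
    while len(received) < announced_length:
        chunk = sock.recv(min(65536, announced_length - len(received)))
        if not chunk:
            raise ConnectionError("peer hung up mid-message")
        received += chunk
    return bytes(received)

# Check 2 (not shown): if the block deserialized from this payload claims to
# contain more data than was announced, drop the connection as misbehaving.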
Cory: 0:18:26.55,0:18:48.27
Are there any concerns with the propagation of those larger blocks? Because there's a lot of questions around you know what the practical size of scaling right now Bitcoin SV could do and the concerns around propagating those blocks across the whole network.
Steve 0:18:45.84,0:21:37.73
Yes, there have been concerns raised about it. I think what people forget is that compact blocks and xThin exist, so a 32MB block does not send 32MB of data in most cases, almost all cases. The concern here that I do find legitimate is the Great Firewall of China. Very early on in Bitcoin SV we started talking with miners on the other side of the firewall and that was one of their primary concerns. We had anecdotal reports of people who were having trouble getting a stable connection any faster than 200 kilobits per second and even with compact blocks you still need to get the transactions across the firewall. So we've done a lot of research into that - we tested our own links across the firewall, or rather CoinGeek's links across the firewall as they’ve given us access to some of their servers so that we can play around, and we were able to get sustained rates of 50 to 90 megabits per second which pushes that problem quite a long way down the road into the future. I don't know the maths off the top of my head, but the size of the blocks that can be sustained is pretty large. So we're looking at a couple of options - it may well be that the chattiness of the peer-to-peer protocol causes some of these issues with the Great Firewall, so we have someone building a bridge concept/tool where you basically just have one kind of TX vacuum on either side of the firewall that collects them all up and sends them off every one or two seconds as a single big chunk to eliminate some of that chattiness. The other is we're looking at building a multiplexer that will sit and send stuff up to the peer-to-peer network on one side and send it over splitters, to send it over multiple links, reassemble it on the other side so we can sort of transit the Great Firewall without too much trouble, but I mean getting back to the core of your question - yes there is a theoretical limit to block size propagation time and that's kind of where Moore's Law comes in. Put in faster links and you kick that can further down the road, and you just keep on putting in faster links. I don't think 128 MB blocks are going to be an issue though with the speed of the internet that we have nowadays.
Connor: 0:21:34.99,0:22:17.84
One of the other changes that you guys are introducing is increasing the max script size so I think right now it’s going from 201 to 500 [opcodes]. So I guess a few of the questions we got was I guess #1 like why not uncap it entirely - I think you guys said you ran into some concerns while testing that - and then #2 also specifically we had a question about how certain are you that there are no remaining n squared bugs or vulnerabilities left in script execution?
Steve: 0:22:15.50,0:25:36.79
It's interesting, the decision - we were initially planning on removing that cap altogether and going to the next cap that comes into play after that (the next effective cap is a 10,000 byte limit on the size of the script). We took a more conservative route and decided to wind that back to 500 - it's interesting that we got some criticism for that when the primary criticism I think that was leveled against us was that it’s dangerous to increase that limit to unlimited. We did that because we’re being conservative. We did some research into these n squared bugs, sorry – attacks, that people have referred to. We identified a few of them and we had a hard think about it and thought - look, if we can find this many in a short time we can fix them all (the whack-a-mole approach) but it does suggest that there may well be more unknown ones. So we thought about taking the whack-a-mole approach, but that doesn't really give us any certainty. We will fix all of those individually but a more global approach is to make sure that if anyone does discover one of these scripts it doesn't bring the node to a screaming halt. The problem here is that because the Bitcoin node is essentially single-threaded, if you get one of these scripts that locks up the script engine for a long time everything that's behind it in the queue has to stop and wait. So what we wanted to do, and this is something we've got an engineer actively working on right now, is once that script validation code path is properly parallelized (parts of it already are), then we’ll basically assign a few threads for well-known transaction templates, and a few threads for any type of script. So if you get a few scripts that are nasty and lock up a thread for a while that's not going to stop the node from working because you've got these other kind of lanes of the highway that are exclusively reserved for well-known script templates and they'll just keep on passing through. Once you've got that in place, then I think we're in a much better position to get rid of that limit entirely because the worst that's going to happen is your non-standard script pipelines get clogged up but everything else will keep ticking along - there are other mitigations for this as well, I mean I know you could always put a time limit on script execution if you wanted to, and that would be something that would be up to individual miners. Bitcoin SV's job I think is to provide the tools for the miners and the miners can then choose, you know, how to make use of them - if they want to set time limits on script execution then that's a choice for them.
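A toy model of the "reserved lanes" idea Steve describes (my sketch; the real work is C++ inside the node, and the lane sizes and template test here are placeholders, not the node's actual code):

# Sketch: dedicate worker pools to well-known script templates so a slow,
# exotic script can only clog the "anything goes" lane.
from concurrent.futures import ThreadPoolExecutor

standard_lane = ThreadPoolExecutor(max_workers=6)   # P2PKH/P2SH-style scripts
exotic_lane = ThreadPoolExecutor(max_workers=2)     # everything else

def looks_like_known_template(tx) -> bool:
    # Stand-in for a real IsStandard-style template check.
    return tx.template in ("p2pkh", "p2sh", "multisig")

def schedule_script_checks(tx, verify):
    lane = standard_lane if looks_like_known_template(tx) else exotic_lane
    # A stuck exotic script never blocks the standard lane's workers.
    return lane.submit(verify, tx)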
Daniel: 0:25:34.82,0:26:15.85
Yeah, I'd like to point out that a node here, when it receives a transaction through the peer to peer network, it doesn't have to accept that transaction, you can reject it. If it looks suspicious to the node it can just say you know we're not going to deal with that, or if it takes more than five minutes to execute, or more than a minute even, it can just abort and discard that transaction, right. The only time we can’t do that is when it's in a block already, but then it could decide to reject the block as well. It's all possibilities there could be in the software.
Steve: 0:26:13.08,0:26:20.64
Yeah, and if it's in a block already it means someone else was able to validate it so…
Cory: 0:26:21.21,0:26:43.60
There’s a lot of discussion about the re-enabled opcodes coming – OP_MUL, OP_INVERT, OP_LSHIFT, and OP_RSHIFT. Can you maybe explain the significance of those opcodes being re-enabled?
Steve: 0:26:42.01,0:28:17.01
Well I mean one of the most significant things is that, other than two which are minor variants of DUP and MUL, they represent almost the complete set of original op codes. I think that's not necessarily a technical issue, but it's an important milestone. MUL is one that I've heard some interesting comments about. People ask me why are you putting OP_MUL back in if you're planning on changing them to big number operations instead of the 32-bit limit that they're currently imposed upon. The simple answer to that question is that we currently have all of the other arithmetic operations except for OP_MUL. We’ve got add, divide, subtract, modulo – it’s odd to have a script system that's got all the mathematical primitives except for multiplication. The other answer to that question is that they're useful - we've talked about a Rabin signature solution that basically replicates the function of DATASIGVERIFY. That's just one example of a use case for this - most cryptographic primitive operations require mathematical operations and bit shifts are useful for a whole ton of things. So it's really just about completing that work and completing the script engine, or rather not completing it, but putting it back the way that it was meant to be.
Connor 0:28:20.42,0:29:22.62
Big Num vs 32 Bit. I've seen Daniel - I think I saw you answer this on Reddit a little while ago - but the new opcodes use logical shifts while Satoshi’s version used arithmetic shifts. The general question that I think a lot of people keep bringing up is, maybe in a rhetorical way, why not restore it back to the way Satoshi had it exactly - what are the benefits of changing it now to operate a little bit differently?
Daniel: 0:29:18.75,0:31:12.15
Yeah there's two parts there - the big number one and LSHIFT being a logical shift instead of arithmetic. So when we re-enabled these opcodes we looked at them carefully and adjusted them slightly, as we did in the past with OP_SPLIT. So the new LSHIFT and RSHIFT are bitwise operators. They can be used to implement arithmetic based shifts - I think I've posted a short script that did that, but we can't do it the other way around, right. You couldn't use an arithmetic shift operator to implement a bitwise one. It's because of the ordering of the bytes in the arithmetic values, so the values that represent numbers. They're little-endian, which means the bytes are swapped around compared to what many other systems use - what I'd consider normal, big-endian. And if you start shifting that properly as a number then the shifting sequence in the bytes is a bit strange, so it couldn't go the other way around - you couldn't implement bitwise shift with arithmetic, so we chose to make them bitwise operators - that's what we proposed.
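To make Daniel's distinction concrete, here is a small illustration (mine, not from any spec; sign handling of script numbers is ignored): a bitwise shift works on the raw byte string, while an arithmetic shift would first have to interpret those bytes as a little-endian number.

# Sketch: bitwise left shift over a byte string vs. arithmetic shift on the
# little-endian number those same bytes encode.
def bitwise_lshift(data: bytes, n: int) -> bytes:
    # Shift the bit pattern left by n, keeping the same byte length.
    width = 8 * len(data)
    value = int.from_bytes(data, "big") << n
    return (value & ((1 << width) - 1)).to_bytes(len(data), "big")

def arithmetic_lshift_le(data: bytes, n: int) -> bytes:
    # Interpret the bytes as a little-endian integer, multiply by 2**n, re-encode.
    value = int.from_bytes(data, "little") << n
    return value.to_bytes(max(1, (value.bit_length() + 7) // 8), "little")

print(bitwise_lshift(b"\x01\x00", 1).hex())        # '0200' - bits move within the string
print(arithmetic_lshift_le(b"\x01\x00", 1).hex())  # '02'   - the *number* 1 becomes 2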
Steve: 0:31:10.57,0:31:51.51
That was essentially a decision that was actually made in May, or rather a consequence of decisions that were made in May. So in May we reintroduced OP_AND, OP_OR, and OP_XOR, and that was also another decision to replace three different string operators with OP_SPLIT was also made. So that was not a decision that we've made unilaterally, it was a decision that was made collectively with all of the BCH developers - well not all of them were actually in all of the meetings, but they were all invited.
Daniel: 0:31:48.24,0:32:23.13
Another example of that is that we originally proposed OP_2DIV and OP_2MUL was it, I think, and this is a single operator that multiplies the value by two, right, but it was pointed out that that can very easily be achieved by just doing multiply by two instead of having a separate operator for it, so we scrapped those, we took them back out, because we wanted to keep the number of operators minimum yeah.
Steve: 0:32:17.59,0:33:47.20
There was an appetite around for keeping the operators minimal. I mean the decision about the idea to replace OP_SUBSTR, OP_LEFT, OP_RIGHT with OP_SPLIT operator actually came from Gavin Andresen. He made a brief appearance in the Telegram workgroups while we were working out what to do with May opcodes and obviously Gavin's word kind of carries a lot of weight and we listen to him. But because we had chosen to implement the May opcodes (the bitwise opcodes) and treat the data as big-endian data streams (well, sorry big-endian not really applicable just plain data strings) it would have been completely inconsistent to implement LSHIFT and RSHIFT as integer operators because then you would have had a set of bitwise operators that operated on two different kinds of data, which would have just been nonsensical and very difficult for anyone to work with, so yeah. I mean it's a bit like P2SH - it wasn't a part of the original Satoshi protocol that once some things are done they're done and you know if you want to want to make forward progress you've got to work within that that framework that exists.
Daniel: 0:33:45.85,0:34:48.97
When we get to the big number ones then it gets really complicated, you know, number implementations because then you can't change the behavior of the existing opcodes, and I don't mean OP_MUL, I mean the other ones that have been there for a while. You can't suddenly make them big number ones without seriously looking at what scripts there might be out there and the impact of that change on those existing scripts, right. The other the other point is you don't know what scripts are out there because of P2SH - there could be scripts that you don't know the content of and you don't know what effect changing the behavior of these operators would mean. The big number thing is tricky, so another option might be, yeah, I don't know what the options for though it needs some serious thought.
Steve: 0:34:43.27,0:35:24.23
That’s something we've reached out to the other implementation teams about - actually really would like their input on the best ways to go about restoring big number operations. It has to be done extremely carefully and I don't know if we'll get there by May next year, or when, but we’re certainly willing to put a lot of resources into it and we're more than happy to work with BU or XT or whoever wants to work with us on getting that done and getting it done safely.
Connor: 0:35:19.30,0:35:57.49
Kind of along this similar vein, you know, Bitcoin Core introduced this concept of standard scripts, right - standard and non-standard scripts. I had pretty interesting conversation with Clemens Ley about use cases for “non-standard scripts” as they're called. I know at least one developer on Bitcoin ABC is very hesitant, or kind of pushed back on him about doing that and so what are your thoughts about non-standard scripts and the entirety of like an IsStandard check?
Steve: 0:35:58.31,0:37:35.73
I’d actually like to repurpose the concept. I think I mentioned before multi-threaded script validation and having some dedicated well-known script templates - when you say the word well-known script template there’s already a check in Bitcoin that kind of tells you if it's well-known or not and that's IsStandard. I'm generally in favor of getting rid of the notion of standard transactions, but it's actually a decision for miners, and it's really more of a behavioral change than it is a technical change. There's a whole bunch of configuration options that miners can set that affect what they do what they consider to be standard and not standard, but the reality is not too many miners are using those configuration options. So I mean standard transactions as a concept is meaningful to an arbitrary degree I suppose, but yeah I would like to make it easier for people to get non-standard scripts into Bitcoin so that they can experiment, and from discussions of I’ve had with CoinGeek they’re quite keen on making their miners accept, you know, at least initially a wider variety of transactions eventually.
Daniel: 0:37:32.85,0:38:07.95
So I think IsStandard will remain important within the implementation itself for efficiency purposes, right - you want to streamline base use case of cash payments through them and prioritizing. That's where it will remain important but on the interfaces from the node to the rest of the network, yeah I could easily see it being removed.
Cory: 0:38:06.24,0:38:35.46
Connor mentioned that there's some people that disagree with Bitcoin SV and what they're doing - a lot of questions around, you know, why November? Why implement these changes in November - they think that maybe the six-month delay might not cause a split. Well, first off what do you think about the ideas of a potential split and I guess what is the urgency for November?
Steve: 0:38:33.30,0:40:42.42
Well in November there's going to be a divergence of consensus rules regardless of whether we implement these new op codes or not. Bitcoin ABC released their spec for the November Hard fork change I think on August 16th or 17th something like that and their client as well and it included CTOR and it included DSV. Now for the miners that commissioned the SV project, CTOR and DSV are controversial changes and once they're in they're in. They can't be reversed - I mean CTOR maybe you could reverse it at a later date, but DSV once someone's put a P2SH transaction into the project or even a non P2SH transaction in the blockchain using that opcode it's irreversible. So it's interesting that some people refer to the Bitcoin SV project as causing a split - we're not proposing to do anything that anyone disagrees with - there might be some contention about changing the opcode limit but what we're doing, I mean Bitcoin ABC already published their spec for May and it is our spec for the new opcodes, so in terms of urgency - should we wait? Well the fact is that we can't - come November you know it's bit like Segwit - once Segwit was in, yes you arguably could get it out by spending everyone's anyone can spend transactions but in reality it's never going to be that easy and it's going to cause a lot of economic disruption, so yeah that's it. We're putting out changes in because it's not gonna make a difference either way in terms of whether there's going to be a divergence of consensus rules - there's going to be a divergence whether whatever our changes are. Our changes are not controversial at all.
Daniel: 0:40:39.79,0:41:03.08
If we didn't include these changes in the November upgrade we'd be pushing ahead with a no-change, right, but the November upgrade is there so we should use it while we can. Adding these non-controversial changes to it.
Connor: 0:41:01.55,0:41:35.61
Can you talk about DATASIGVERIFY? What are your concerns with it? The general concept that's been kind of floated around because of Ryan Charles is the idea that it's a subsidy, right - that it takes a whole megabyte and kind of crunches that down and the computation time stays the same but maybe the cost is lesser - do you kind of share his view on that or what are your concerns with it?
Daniel: 0:41:34.01,0:43:38.41
Can I say one or two things about this – there’s different ways to look at that, right. I'm an engineer - my specialization is software, so the economics of it I hear different opinions. I trust some more than others but I am NOT an economist. I kind of agree with the ones with my limited expertise on that it's a subsidy it looks very much like it to me, but yeah that's not my area. What I can talk about is the software - so adding DSV adds really quite a lot of complexity to the code right, and it's a big change to add that. And what are we going to do - every time someone comes up with an idea we’re going to add a new opcode? How many opcodes are we going to add? I saw reports that Jihan was talking about hundreds of opcodes or something like that and it's like how big is this client going to become - how big is this node - is it going to have to handle every kind of weird opcode that that's out there? The software is just going to get unmanageable and DSV - that was my main consideration at the beginning was the, you know, if you can implement it in script you should do it, because that way it keeps the node software simple, it keeps it stable, and you know it's easier to test that it works properly and correctly. It's almost like adding (?) code from a microprocessor you know why would you do that if you can if you can implement it already in the script that is there.
Steve: 0:43:36.16,0:46:09.71
It’s actually an interesting inconsistency because when we were talking about adding the opcodes in May, the philosophy that seemed to drive the decisions that we were able to form a consensus around was to simplify and keep the opcodes as minimal as possible (ie where you could replicate a function by using a couple of primitive opcodes in combination, that was preferable to adding a new opcode that replaced) OP_SUBSTR is an interesting example - it's a combination of SPLIT, and SWAP and DROP opcodes to achieve it. So at really primitive script level we've got this philosophy of let's keep it minimal and at this sort of (?) philosophy it’s all let's just add a new opcode for every primitive function and Daniel's right - it's a question of opening the floodgates. Where does it end? If we're just going to go down this road, it almost opens up the argument why have a scripting language at all? Why not just add a hard code all of these functions in one at a time? You know, pay to public key hash is a well-known construct (?) and not bother executing a script at all but once we've done that we take away with all of the flexibility for people to innovate, so it's a philosophical difference, I think, but I think it's one where the position of keeping it simple does make sense. All of the primitives are there to do what people need to do. The things that people don't feel like they can't do are because of the limits that exist. If we had no opcode limit at all, if you could make a gigabyte transaction so a gigabyte script, then you can do any kind of crypto that you wanted even with 32-bit integer operations, Once you get rid of the 32-bit limit of course, a lot of those a lot of those scripts come up a lot smaller, so a Rabin signature script shrinks from 100MB to a couple hundred bytes.
Daniel: 0:46:06.77,0:47:36.65
I lost a good six months of my life diving into script, right. Once you start getting into the language and what it can do, it is really pretty impressive how much you can achieve within script. Bitcoin was designed, was released originally, with script. I mean it didn't have to be – it could just be instead of having a transaction with script you could have accounts and you could say trust, you know, so many BTC from this public key to this one - but that's not the way it was done. It was done using script, and script provides so many capabilities if you start exploring it properly. If you start really digging into what it can do, yeah, it's really amazing what you can do with script. I'm really looking forward to seeing some some very interesting applications from that. I mean it was Awemany his zero-conf script was really interesting, right. I mean it relies on DSV which is a problem (and some other things that I don't like about it), but him diving in and using script to solve this problem was really cool, it was really good to see that.
Steve: 0:47:32.78,0:48:16.44
I asked a question to a couple of people in our research team that have been working on the Rabin signature stuff this morning actually and I wasn't sure where they are up to with this, but they're actually working on a proof of concept (which I believe is pretty close to done) which is a Rabin signature script - it will use smaller signatures so that it can fit within the current limits, but it will be, you know, effectively the same algorithm (as DSV) so I can't give you an exact date on when that will happen, but it looks like we'll have a Rabin signature in the blockchain soon (a mini-Rabin signature).
Cory: 0:48:13.61,0:48:57.63
Based on your responses I think I kinda already know the answer to this question, but there's a lot of questions about ending experimentation on Bitcoin. I was gonna kind of turn that into – with the plan that Bitcoin SV is on do you guys see like a potential one final release, you know that there's gonna be no new opcodes ever released (like maybe five years down the road we just solidify the base protocol and move forward with that) or are you guys more on the idea of being open-ended with appropriate testing that we can introduce new opcodes under appropriate testing.
Steve: 0:48:55.80,0:49:47.43
I think you've got a factor in what I said before about the philosophical differences. I think new functionality can be introduced just fine. Having said that - yes there is a place for new opcodes but it's probably a limited place and in my opinion the cryptographic primitive functions for example CHECKSIG uses ECDSA with a specific elliptic curve, hash 256 uses SHA256 - at some point in the future those are going to no longer be as secure as we would like them to be and we'll replace them with different hash functions, verification functions, at some point, but I think that's a long way down the track.
Daniel: 0:49:42.47,0:50:30.3
I'd like to see more data too. I'd like to see evidence that these things are needed, and the way I could imagine that happening is that, you know, that with the full scripting language some solution is implemented and we discover that this is really useful, and over a period of, like, you know measured in years not days, we find a lot of transactions are using this feature, then maybe, you know, maybe we should look at introducing an opcode to optimize it, but optimizing before we even know if it's going to be useful, yeah, that's the wrong approach.
Steve: 0:50:28.19,0:51:45.29
I think that optimization is actually going to become an economic decision for the miners. From the miner’s point of view is if it'll make more sense for them to be able to optimize a particular process - does it reduce costs for them such that they can offer a better service to everyone else? Yeah, so ultimately these decisions are going to be miner’s main decisions, not developer decisions. Developers of course can offer their input - I wouldn't expect every miner to be an expert on script, but as we're already seeing miners are actually starting to employ their own developers. I’m not just talking about us - there are other miners in China that I know have got some really bright people on their staff that question and challenge all of the changes - study them and produce their own reports. We've been lucky with actually being able to talk to some of those people and have some really fascinating technical discussions with them.
submitted by The_BCH_Boys to btc [link] [comments]

The BCH blockchain is 165GB! How good can we compress it? I had a closer look

Someone posted their results for compressing the blockchain in the telegram group, this is what they were able to do:
Note, bitcoin by its nature is poorly compressible, as it contains a lot of incompressible data, such as public keys, addresses, and signatures. However, there's also a lot of redundant information in there, e.g. the transaction version, and it's usually the same opcodes, locktime, sequence number etc. over and over again.
I was curious and thought: how much could we actually compress the blockchain? This is actually very relevant: as I established in my previous post about the costs of a 1GB full node, storage and bandwidth costs seem to be among the biggest bottlenecks, while CPU computation is actually the cheapest part, since we can almost get away with ten-year-old CPUs.
Let's have a quick look at the transaction format and see what we can do. I'll have a TL;DR at the end if you don't care about how I came up with those numbers.
Before we jump in, don't forget that I'll be streaming again today, building an SPV node, as I've already posted about here. Last time we made some big progress, I think! Check it out here https://dlive.tv/TobiOnTheRoad. It'll start at around 15:00 UTC!

Version (32 bits)

There are currently two transaction versions in use. Unless we add new ones, we can compress this field to 1 bit (0 = version 1; 1 = version 2).

Input/output count (8 to 72 bits)

This is the number of inputs the transaction has (see section 9 of the whitepaper). If the number of inputs is below 253, it will take 1 byte, and otherwise 2 to 8 bytes. This nice chart shows that, currently, 90% of Bitcoin transactions only have 2 inputs, sometimes 3.
A byte can represent 256 different numbers. Having this as the lowest granularity for input count seems quite wasteful! Also, 0 inputs is never allowed in Bitcoin Cash. If we represent one input with 00₂, two inputs with 01₂, three inputs with 10₂ and everything else with 11₂ + current format, we get away with only 2 bits more than 90% of the time.
Outputs are slightly higher, 3 or less 90% of the time, but the same encoding works fine.
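Here's a quick sketch in Python of what that 2-bit count encoding could look like (a sketch only; the escape path below handles just counts under 253 to keep it short):

    def encode_varint_bits(n):
        """The existing one-byte count encoding, written out as a bit string.
        (Counts of 253+ use 3-9 bytes in the real format; omitted here for brevity.)"""
        assert 1 <= n < 253
        return format(n, '08b')

    def encode_count(n):
        """2-bit prefix for the common cases: 1 -> 00, 2 -> 01, 3 -> 10,
        and 11 as an escape code followed by the existing encoding."""
        if 1 <= n <= 3:
            return format(n - 1, '02b')
        return '11' + encode_varint_bits(n)

    for n in (1, 2, 3, 7):
        print(n, '->', encode_count(n))    # 7 falls back to 2 + 8 = 10 bits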

Input (>320 bits)

There can be multiple of those. Each input has the following format:
• Previous transaction hash (256 bits) – which transaction the coins being spent came from.
• Previous output index (32 bits) – which output of that transaction is being spent.
• Signature script (variable length, with a length prefix) – for a standard P2PKH spend this is mostly a signature plus a public key, which are essentially random and can't be compressed.
• Sequence number (32 bits) – almost always FF FF FF FF, so we can spend 1 bit to flag the common case and only include the full 32 bits otherwise.

Output (≥72 bits)

There can be multiple of those. They have the following format:
• Value (64 bits) – the number of satoshis sent to this output.
• Output script (variable length, with a length prefix) – for a standard P2PKH output this is a fixed 25-byte template around a 160-bit address hash, so apart from the hash itself it's highly predictable.

Lock time (32 bits)

This is 00 00 00 00 most of the time and only occasionally will transactions actually be time-locked; the lock time also only changes the meaning if a sequence number of an input is not FF FF FF FF. We can do the same trick as with the sequence number, such that most of the time this will be just 1 bit.
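The same flag-bit trick, sketched in Python just to make the sizes concrete:

    def encode_optional_u32(value, common_value):
        """A 32-bit field that is almost always `common_value`:
        a single 0 bit for the common case, 1 + the full 32 bits otherwise."""
        if value == common_value:
            return '0'
        return '1' + format(value, '032b')

    print(len(encode_optional_u32(0xFFFFFFFF, 0xFFFFFFFF)))   # 1 bit for the usual sequence number
    print(len(encode_optional_u32(550000, 0xFFFFFFFF)))       # 33 bits when it actually differs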

Total

So, in summary, we have:
Nice table:
No. of inputs | No. of outputs | Uncompressed size | Compressed size | Ratio
1 | 1 | 191 bytes (1528 bits) | 128 bytes (1023 bits) | 67.0%
1 | 2 | 226 bytes (1808 bits) | 151 bytes (1202 bits) | 66.5%
2 | 1 | 339 bytes (2712 bits) | 233 bytes (1861 bits) | 68.6%
2 | 2 | 374 bytes (2992 bits) | 255 bytes (2040 bits) | 68.2%
2 | 3 | 408 bytes (3264 bits) | 278 bytes (2219 bits) | 68.0%
3 | 2 | 520 bytes (4160 bits) | 360 bytes (2878 bits) | 69.2%
3 | 3 | 553 bytes (4424 bits) | 383 bytes (3057 bits) | 69.1%
Interestingly, if we take a compression ratio of 69% and apply it to the 165 GB blockchain, we'd get 113.8 GB. Which is (almost) exactly what 7zip was able to give us with ultra compression!
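For anyone who wants to check the numbers, here's a tiny Python snippet that reproduces the ratio column and the projected chain size (165 GB is the current chain size quoted at the top):

    # (uncompressed bits, compressed bits) for each row of the table above
    rows = {
        (1, 1): (1528, 1023),
        (1, 2): (1808, 1202),
        (2, 1): (2712, 1861),
        (2, 2): (2992, 2040),
        (2, 3): (3264, 2219),
        (3, 2): (4160, 2878),
        (3, 3): (4424, 3057),
    }

    for (n_in, n_out), (raw, packed) in rows.items():
        print(f"{n_in} in / {n_out} out: {packed / raw:.1%}")   # reproduces the ratio column

    print(f"165 GB * 0.69 = {165 * 0.69:.1f} GB")               # the ~113.8 GB figure above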
I think there's not a lot we can do to compress the transaction further: even if we only transmit public keys, signatures and addresses, we'd have at minimum 930 bits, which would still only be a 61% compression ratio (and that's missing the outpoint and value). 7zip is probably also able to take advantage of address/public key re-use if someone sends to/from the same address multiple times, which we haven't explored here; but it's generally discouraged to send to the same address multiple times anyway, so I didn't explore that. We'd still have signatures clocking in at 512 bits.
Note that the compression scheme I outlined here operates on a per transaction or per block basis (if we compress transacted satoshis per block), unlike 7zip, which compresses per blockchain.
I hope this was an interesting read. I expected the compression ratio to be higher, but still, if it takes 3 weeks to sync uncompressed, it'll take just 2 weeks compressed. Which can mean a lot for a business, actually.

I'll be streaming again today!

As I've already posted about here, I will stream about building an SPV node in Python again. It'll start at 15:00 UTC. Last time we made some big progress, I think! We were able to connect to my Bitcoin ABC node and send/receive our first version message. I'll do a nice recap of what we've done in that time, as there haven't been many present last time. And then we'll receive our first headers and then transactions! Check it out here: https://dlive.tv/TobiOnTheRoad.
submitted by eyeofpython to btc [link] [comments]

Weekly Update: Mycro on ParJar, PAR on MetaMorphPro, new customer for Resolvr, 1UP on IDEX... – 19 Jul - 25 Jul'19

Weekly Update: Mycro on ParJar, PAR on MetaMorphPro, new customer for Resolvr, 1UP on IDEX... – 19 Jul - 25 Jul'19
Heya everyone, looks like we are in for another round of rapid catch ups on the weekly updates. Haha. Here's another exciting week at Parachute + partners (19 Jul - 25 Jul'19):

In honour of our latest partnership with Silent Notary, this week we had an SNTR Parena. Richi won the finale to take home a cool share from the 1.5M SNTR pot. The weekly Parena had a 100k PAR pot. McPrine took home the lion’s share by beating Ken in a closely fought finale. In 8 months since ParJar started, we are now at 12k users, 190k transactions and 200+ communities. Cap says: “…to put it into perspective - June 18th we were around 100k transactions and 9 k users. A month later we’ve added 3k new users (33% growth) and 80,000 new transactions”. Freaking amazing! And thank you for the shoutout aXpire! MYO (Mycro) was added to ParJar this week. And their community started experiencing the joys of tipping.
Lolarious work by @k16v5q5!
Last week MetaMorphPro did a Twitter vote to list new projects. Turns out Parachuters did PAR a solid. Woot woot! The first ever official TTR shirt is already live in the Parachute shop. Alexis announced the start of a shirt design contest to add to the TTR shirt inventory. Ian’s art quiz in TTR this week saw 25k PAR being given away to winners. Victor’s quiz had another 25k PAR pot for the winners. And Unique’s Math quiz in TTR was a 50k PAR extravaganza. All in all, 100k PAR won in quizzes in TTR this week. Sweet! Cryptonoob (Tom) set up a survey this week for “..for people who are interested in Crypto but don't know where to start..” for his work on the Parachute app UX. We all know how much Gian loves the reality show Big Brother. So we saw a new take on his Tuesday fun events. Mention your favourite reality show and what it’s all about to get some cool PAR. Yay!
A PAR coaster makes its way from design to final product in @k16v5q5’s workshop
Chris’ Golf tourney contest resulted in no winners since there were no correct guesses. So he decided to give out fun prizes instead: like Jason for coming last, Win for a “hilariously bad guess” of 100 strokes for the champions total score etc. Haha. However, there were a few top prize winners as well. LordHades, with a tournament score of 1968, took home 50k PAR as grand prize. Neat! Ali, Hang, Clinton and Tony came in close at 2nd to 5th positions. Congrats! And with that, Chris announced the start of another contest: Premier League Challenge for Parachuters (Entry code: x0zj2d) with an entry fee of 5000 PAR each. Prize pool yet to be announced. Jason is still in the lead this week in the Big Chili Race at 47 cm. Not much change either in the other plants. Slow week at Chili land.
Ric getting in on that sweet Parachute merch
Last week we shared that AXPR got listed on Binance Dex. The ERC20-BEP2 conversion bridge went live this week. Learn how to convert your ERC20 tokens to the BEP2 variant from the available how-to guides (article/video/gif). To mark the occasion, aXpire gave away a ton of BNB in an easter egg contest plus a 1% AXPR deposit bonus to folks who started using the bridge. Remember, we had mentioned that the reason for the weekly double burn of AXPR will be revealed this week? Well here it is. Resolvr onboarded a new client: HealthGates. More fees, more burn. Read more about it here. Woot! Victor hosted a trivia like every week on Friday at aXpire for 1000 AXPR. 10 questions. 100 AXPR each. Nice! Catch up on the week that was at aXpire from their latest video update. 2gether was selected as one of the top 100 most innovative projects by South Summit this week. Cryzen now built a Discord-Telegram chat bridge so that anything posted in either platform gets cross posted on the other. The latest WandX update covers the dev work that’s been going on for the past few weeks – support for Tezos wallet, staking live for Tezos, Livepeer and Loom etc.
2gether on South Summit’s honour roll
BOMB community member rouse wrote a quick script on how to identify and avoid common crypto scams. Have a read. As BOMB says, “Stay vigilant and always verify”. Last week's giveaway for the top lessons shared by entrepreneurs had so many good entries that the final list was expanded to 19 winners. Awesome stuff! Zach’s latest article on the difference between BOMB and BOMBX explores both the basic and the more complex distinctions. Switcheo’s introductory piece on hyperdeflationary tokens also talks at length about the BOMB project. Zach also announced the start of the Telegram Takeover Challenge this week – get new communities to experience ParJar and BOMB and earn some cool BOMB tokens in return. Win win! In preparation for the integration of the SMS feature in the Birdchain app, the team released an article on some key statistics. Here’s a video from Birdchain CEO Joao Martins discussing the feature. The latest Bounty0x distribution report can be found here. Also, check out a shoutout to the platform in this NodesOfValue article on bounty hunting opportunities.
Start of beta testing for SMS feature in Birdchain
The ETHOS Universal Wallet now supports Bitcoin Cash and Typerium. Following ETHOS’ listing on Voyager, it will also become the native token on Voyager. Switch continued its PR campaign with cover pieces on Yahoo, CCN and DDFX this week. Altcoin Buzz has a section on its site named “Community Speaks” where members of a crypto community share updates on a project they support. This week, Fantom was featured in this section. V-ID is the latest project using Fantom’s ERC20-BEP2 bridge for listing on Binance Dex. Big props to FTM for opening it up to other projects. FTM got listed on Probit and Airswap. FTM can also now be used as collateral for borrowing on the Constant platform. The Fantom Foundation joined the Australian Digital Commerce Association which works on regulatory advocacy in blockchain. This was also a perfect setting for the Fantom Innovation Labs team to attend the APAC Blockchain Conference in Sydney. Here’s a report. In this week’s techno-literature, have a read of the various Fantom mainnets and the TxFlow protocol by clicking here and here respectively.
Another proposed token utility of ETHOS
Uptrennd’s 1UP token was listed on IDEX this week. To put it simply, the growth at Uptrennd Twitter has been explosive. Check out these numbers. Awesome stats! This free speech vs fair pay chart shared by Jeff explains why the community backs the platform. About 96% of 1UP issued this week has been used to level up on Uptrennd. Want a recap of the latest at Uptrennd? Click here. Crypto influencer Didi Taihuttu and his family (The Bitcoin Family) joined the platform this week. Congrats once again to Horizon State for making it to the finals of The Wellington Gold Awards. Some great networking opportunities and exposure right there. If you have been lagging behind on HST news, the latest community update covers the past month. We had also mentioned last week that Horizon State is conducting a vote for The Opportunities Party in New Zealand. Here’s a media report on it. Catch up on the latest at District0xverse from their Weekly and Dev updates. The Meme Factory bot was introduced this week to track new memes and marketplace trends on Meme Factory. The HYDRO article contest started last week was extended to the 27th. 50k HYDRO in prizes to be won. Noice! Hydrogen got nominated as a Finalist to the 2019 FinXTech Awards. HYDRO was also listed on the HubrisOne wallet this week. And finally, here’s a closer look at the Hydro Labs team. The folks who make the magic happen. Sup guys!
The Parachute Big Chili Race Update – Jason at 1st, Sebastian at 3rd
And with that, we close for this week at Parachute and partners. See you again with another weekly update soon.
submitted by abhijoysarkar to ParachuteToken [link] [comments]

Looking for BCH testnet API

Hi everyone,

I'm trying to find a publicly accessible Bitcoin Cash testnet API that doesn't require an account or at the very least doesn't need my private keys, and supports both address balance lookups and posting of raw transactions. I'm using Bitcoin ABC to support this functionality locally but I also need to be able to do this through a remote RPC API (preferably JSON-based).
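For context, this is the sort of thing I can already do against the local node over JSON-RPC and would like to replicate remotely (a minimal Python sketch; the credentials and port are placeholders for whatever is in your bitcoin.conf, 18332 being the usual testnet RPC port):

    import base64
    import json
    import urllib.request

    RPC_URL = "http://127.0.0.1:18332/"            # usual testnet RPC port; adjust to your node
    RPC_USER, RPC_PASS = "rpcuser", "rpcpassword"  # placeholders for the values in your bitcoin.conf

    def rpc_call(method, params=None):
        """Send a single JSON-RPC request to the local node and return the result."""
        payload = json.dumps({"jsonrpc": "1.0", "id": "0", "method": method, "params": params or []})
        req = urllib.request.Request(RPC_URL, data=payload.encode())
        creds = base64.b64encode(f"{RPC_USER}:{RPC_PASS}".encode()).decode()
        req.add_header("Authorization", "Basic " + creds)
        req.add_header("Content-Type", "application/json")
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["result"]

    raw_tx_hex = "0200...00"                       # placeholder: a raw transaction built and signed elsewhere
    print(rpc_call("sendrawtransaction", [raw_tx_hex]))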

Here's a quick rundown of where I've looked so far:

https://rest.bitcoin.com/
This seems like the most obvious option and supports the functionality I'm looking for but it appears that it's only for livenet. For example, here's a testnet address with a little over 0.5 tBCH:
https://explorer.bitcoin.com/tbch/address/n22LEtuzgniSXeMBoUjgpqkDm6Nf7SEMi4
...the API call, however, doesn't like this address:
https://rest.bitcoin.com/v2/address/details/bchtest:qrs0pew4sa7qtdz36jwc5rlwlmsfrlhqxuawvcfxsl
{"error":"Invalid network. Trying to use a testnet address on mainnet, or vice versa."}
Maybe there's another API endpoint I should be using here? Unfortunately I'm not seeing it in the documentation.

https://blockdozer.com/

The API documentation doesn't appear to include posting of transactions, raw or otherwise, and the Broadcast Raw Transaction page appears to be for Bitcoin only.

https://dev.btc.com/docs/js

This API appears to require the BlockTrail SDK and the "Sending Transactions" section seems to indicate that it uses a custodial/shared account model: "The SDK handles all the logic of creating the transaction, signing it with your primary key and sending it to the API so BTC.com can co-sign the transaction and send it to the bitcoin network."
Although I understand that it's not the same thing, to me this isn't too far from requiring my private keys (i.e. it's a loss of signing control).

https://www.bitgo.com/api/v2/

This API supports tBCH but uses hosted wallets (i.e. requires an account and private key[s]).

https://blockchair.com/

The API documentation has a section on broadcasting transactions but only lists bitcoin, bitcoin-cash, ethereum, and litecoin as available chains.

https://docs.cryptoapis.io/

Requires an account (API key) but appears to support the tBCH functionality I'm looking for. It's on the back-burner if I can't find anything else.

https://www.blockchain.com/api/blockchain_wallet_api

It looks like this API requires the developer to create and host a wallet with them in order to send transactions. I'm not sure if I'd have any control over the private key(s) here.

https://bitpay.com/api

Appears to require an account and doesn't appear to fully support the functionality I'm looking for.

If I've missed anything or I'm mistaken about what I've looked into I would very much appreciate your feedback! More importantly, if you know of a service that I haven't listed and that can do what I need it to do I thank you in advance for sharing it.
submitted by monican_agent to Bitcoincash [link] [comments]

What happens to transactions in the mempool, exactly?

Every node can make up rules for mempools.
When I submit a transaction, I can attach a fee from 0 to however much I can afford. Obviously miners want to process as many transactions as they can to make money. A transaction with 0 fee makes them work for nothing so I would imagine most mempools are configured to ignore or drop transactions with 0 fee. Is this true?

Do Bitcoin ABC and Bitcoin Unlimited have different or similar rules for mempools? What are they?

Is there a good block explorer / transaction tracker where I can sort all recent (last week or so) transactions by fee and figure out what the lowest fee being processed is?

Are there any good sites out there that keep track of transactions when they first enter the mempool and how long they take to clear?

And let's say I create a transaction with 0 fee and submit it. Assuming nobody processes it... if I delete my wallet.dat and then reload Bitcoin unlimited/abc will I stop broadcasting it? And from that point how long before the transaction vanishes from all mempools?

Do most mempools refuse to even listen to 0 fee transactions? If I offer just a single satoshi will they hear it but refuse to process it for a while? Are there some miners out there that process even 0 fee transactions still?

Just curious. If I wanted to clean up inputs or play around with something that isn't worth 8 cents a transaction could I do it on BCH if I don't mind waiting or should I be playing with Dogecoin or something instead?
submitted by Aro2220 to btc [link] [comments]

How to split coins from a paper wallet

I just split my coins from a BCH balance on a paper wallet. Here's how I did it. The method is straightforward: taint the paper wallet with some BCH/ABC coin, then sweep the paper wallet into a BCH/ABC wallet, followed by a sweep of the paper wallet into an SV wallet.
You will need a wallet that supports the BCH/ABC chain and one that supports the SV chain. The wallets need to have a function to sweep funds from a paper wallet. I used Coinomi which just released an update supporting the SV chain. Create you wallets if necessary so they are ready to receive coins from the paper wallet.
First, go to the forkfaucet.cash site and submit your paper wallet address to the site. When you do this your paper wallet will receive a small amount of BCH/ABC only coins. Go to a block explorer for each chain (blockchair.com for BCH/ABC and bsvexplorer.info for SV). Enter your paper wallet address, and verify that the taint coins are being sent to the paper wallet on BCH/ABC and not on SV. Wait for the taint transaction to confirm on the BCH/ABC chain just to be safe. Edit 3: Blockchair now has support for the SV chain, so checking the taint coin movement can all be done there.
Once the taint transaction has confirmed, go to your BCH/ABC wallet and sweep the coins from the paper wallet. You will need to supply the private key from your paper wallet. This should move all the coins, including the tainted coins to the BCH/ABC wallet. The coins won't move on the SV chain since the taint coins are not valid on the SV chain. Again, wait for the transaction moving the coins to the BCH/ABC wallet to confirm before continuing.
Once the coins are safely confirmed in the BCH/ABC wallet, sweep the coins from the paper wallet into the SV wallet. Now you have your coins split in the two wallets.
Edit: It seems the forkfaucet.cash site is not reliably accessible. I've seen free.bitcoin.com as an alternate elsewhere. Any source of BCH/ABC only coins can be used to taint the paper wallet.
Some options:
• Buy some on an exchange and withdraw to the paper wallet.
• Have a friend with BCH/ABC only coins send a bit to your paper wallet.
• Do you have some BCH/ABC coins in another wallet that were already split away?
You get the idea.
Edit 2: u/ftrader is offering BSV for tainting so that coins can be split. NOTE: if you use BSV to taint your paper wallet, then you need to sweep to the BSV wallet first, then sweep to the BCH/ABC wallet.
submitted by deeb33 to btc [link] [comments]

Sent BCH from Coinbase to Kraken. How do I get my BSV?

Kraken sent an email on Nov 25th stating the following:
The recent Bitcoin Cash hard fork resulted in two (for now) viable chains:
  1. Bitcoin Cash (following the Bitcoin Cash ABC protocol and roadmap published by bitcoincash.org)
  2. Bitcoin SV (following the Bitcoin Cash SV protocol and roadmap published by nChain)
As previously announced, tokens of the Bitcoin Cash ABC protocol are listed on Kraken as Bitcoin Cash (BCH). However, Kraken now also support tokens of the Bitcoin Cash SV protocol under the designation Bitcoin SV (BSV).
We took the BCH balance you had at the time of the fork and added an equal amount of BSV to your balance (so if you had 10 BCH at this time, you now have 10 BCH and 10 BSV).
Deposits and withdrawals are presently enabled for both chains. Confirmation requirements are 15 for BCH and 30 for BSV. Sending a non-replay protected BCH or BSV transaction to a Kraken deposit address will result in a credit of BCH and BSV if the transaction transferred both. This means that if your BCH wallet doesn’t support coin splitting, you can send your BCH to Kraken and we’ll credit you both the BCH and the BSV.
---------
I'm a rookie when it comes to crypto terminology, but it seemed simple enough: send BCH to Kraken, and they'll credit your BSV. That didn't happen, and here's their response:
I have gone ahead and checked the transaction ID for the second deposit on a BSV block explorer and can see it has not been broadcast on the BSV blockchain. This means your wallet either did not broadcast it to the BSV chain or that the transaction wasn't replayable. If it is replayable you will need to find a tool to broadcast the transaction on, or alternatively import your keys into a BSV wallet and send the tokens manually.
Since we aren't affiliated with any bsv wallet clients or broadcasting tools we cannot make any suggestions or help in this process.
So have I lost that BSV because the Coinbase transaction wasn't replayable? Or do I just need to split offline via ElectronCash or something else?
Thanks for any input!

submitted by bloat_toad to btc [link] [comments]

February report on ConsenSys spoke developments

We've got a full report on what the spokes at ConsenSys have been up to. Check out more here.
EDIT: Fixed broken link (“Infura - Investing in the Decentralization of Ethereum” - Thanks u/shazow!)

Alethio

A comprehensive suite of blockchain exploration, analysis, and forecasting products for the Ethereum network.

Allinfra

All infrastructure, for all — platform for the tokenization of large scale unlisted infrastructure.

Bounties Network

Freelance task fulfillment, paying out in any Ethereum token upon successful completion.

Decrypt Media

A daily news site covering all things crypto and the advent of the decentralized web.

Endjinn

Simulate your key token mechanisms to get on the awesome future usage timeline.

Fathom

A decentralized peer assessment protocol forming the foundation of a universal academic system.

Gitcoin

The easiest way to leverage the open source community to incentivize or monetize work.

Grid+

Leverages the public Ethereum blockchain to give consumers direct access to wholesale energy markets.

Helena

A decentralized platform for curated fundamental token research and analysis.

Infura

A scalable, standards-based, globally distributed cluster and API endpoint for Ethereum, IPFS, and other infrastructures.

Kaleido

An all-in-one enterprise SaaS platform that radically simplifies the creation and operation of secure blockchain networks and accelerates the journey from PoC to Production.

Kauri

The Ethereum community’s technical knowledge network.

Liquality

Swap cryptocurrencies without middlemen.

LitePaper

A simple knowledge base for the crypto-verse.

Meridio

A blockchain platform for creating, managing, and transferring fractional real estate ownership.

MetaMask

MetaMask is a browser extension that allows you to run Ethereum dApps right in your browser without running a full Ethereum node.

Nethereum

A .NET integration library for Ethereum allowing users to interact with Ethereum clients like Geth or Parity using RPC.

OpenLaw

A blockchain-based protocol for the creation and execution of legal agreements in a user-friendly, compliant way.

PegaSys

A protocol engineering team building Ethereum tech for the public chain community and leading enterprises.

Rhombus

Securely connects smart contracts with accurate, computable real-world data.

Truffle

A development environment, testing framework, and asset pipeline for Ethereum-based smart contracts and dapps.

TruSet

Building multi-sided marketplaces to collect, validate, publish, and commercialize business-critical reference data.

uPort

A self-sovereign identity management platform that allows users to register their own identity on Ethereum, send and request credentials, sign transactions, and securely manage keys and data.
submitted by ConsenSys_Socialite to ethereum [link] [comments]

BCH Refresher (Update)

The points above summarise the content of the following Bitcoin.com article: https://news.bitcoin.com/bitcoin-cash-network-status-transactions-on-the-rise/
submitted by RaderVader to Bitcoincash [link] [comments]

Coin splitting from a paper wallet using Bitcoin.com app

I moved a small amount of BCH to a paper wallet prior to the fork. Yesterday, I swept the wallet to my bitcoin.com mobile app, and sent the entire balance to my Kraken account (BSV deposit address). I was expecting my Kraken account to be credited with both BCH and BSV, however, only BCH was credited. Transaction on the ABC chain is here.
The BSV explorer indicates my BSV is still sitting in the paper wallet address here.
When I imported the paper wallet's private key into electron cash, it shows an unconfirmed send transaction on the BSV chain. It says it should be confirmed in 25 blocks. The transaction was sent in block 558215 (ABC chain), so I assumed that this would show up at the same block height on the SV chain. SV chain is currently at block 558240, and still no sign of this transaction.
What is happening? Do I just need to wait longer? Why didn't this transaction broadcast to the SV chain in block 558215?

I did a similar transaction from my Copay wallet to Coinex, and was instantly credited with both SV and ABC. Was the difference here because the funds were already in my Copay wallet, as opposed to being swept from a paper wallet?
submitted by btcbchanon to btc [link] [comments]

Transaction confirmed but can not find transaction-ID?

UPDATE: I completely reinstalled ABC and downloaded the entire blockchain again. Then imported a backup from my wallet (from before the transaction) and everything seems to be ok. And I still have my BCH :) Thanks all!
Hi there,
I made a transaction with BitcoinABC, and it has been confirmed 21 times by now. However, when I check the various online block explorers they can not find the transaction-ID. And when I lookup the receiving address it still has 0 balance. Any idea what is going on here? I figure that 2 hours after making the transaction it should be visible/received by now?
Edit:
TransactionID: x
Receiving address: x
submitted by JeMoede to btc [link] [comments]

Latest block updates

They're coming in fast! Seems like a fight between ViaBTC and the anonymous miner pool ("MCPool") so far. The ticker name seems to be BCH - BitCoinCash. I'm updating this thread both on /BTC and /BitCoin.
Reply below if you want me to add more stats to each block!

Block difficulty

The more miners there are on the network, with more power collectively behind them in pools, the harder blocks have to be to solve, so difficulty increases over time. BitCoin has a built-in mechanism where difficulty will drop if no blocks are mined in a certain (long) amount of time. This BCC fork has been modified so that if fewer than 6 blocks have been mined in the past 12 hours, difficulty drops (or if hash rates drop to 1/12th of their former rate - toinewx).
This means at the current rate, we will not see a difficulty drop. This could be good or bad depending on what the motives of the groups mining are - do they want to lock everyone else out difficulty-wise even though they are pushing nowhere near as much power, or are they enthusiasts supporting the new chain? Time will tell.
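A toy version of that rule in Python (heavily simplified; the real consensus code works off block header timestamps/median-time-past, this just captures the "fewer than 6 blocks in the last 12 hours" idea described above):

    import time

    def should_drop_difficulty(block_times, now, window_hours=12, min_blocks=6):
        """Sketch of the rule described above: True if fewer than `min_blocks`
        blocks were found in the last `window_hours` hours."""
        window_start = now - window_hours * 3600
        return sum(1 for t in block_times if t >= window_start) < min_blocks

    now = time.time()
    recent = [now - 11 * 3600, now - 7 * 3600, now - 2 * 3600]   # only 3 blocks in 12 hours
    print(should_drop_difficulty(recent, now))                   # True -> difficulty would drop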

Links

I have compiled here various useful links, resources and threads to do with the split and BCH.
Reddit links are np (non participation) but should you go over to the other sub (/btc /bitcoin or vice versa) please be civil. There is no need for the war that is seemingly going on. Keep your cool!

Stats

  • 25% first mined by ViaBTC as of block 12
  • 75% first mined by "Other" (MCPool/Solo Miner?) as of block 12
  • Largest (and first block): 6985 transactions, 1915175B size (1.82MB, over 1MB almost 2MB!!) as of block 12
  • First block was mined 5 hours 52 min 41 sec after fork at 7:12:41 PM (1st Aug 2017). It contained 6985 transactions, and was 1915175B in size (1.82MB, over 1MB). It was mined by ViaBTC.
  • Second block took 24 min 38 sec after first block to mine by ViaBTC
  • Quickest block after last: Block 8, + 3 min 37 sec after block 7 as of block 12
  • First ViaBTC block was mined on Aug 1, 2017 7:12:41 PM
  • First "MCPool" block was mined on Aug 1, 2017 7:37:19 PM
  • VitaBTC mined the first two blocks
  • BCH is 103 blocks behind BTC as of block 12
  • Armchair math has BCH at somewhere between 10 and 15% of the hash rate of BTC (see the quick calculation after this list)
  • The fork began at 13:20:00 UK time and kicked off shortly after when 7 blocks had been mined on the BTC chain at 14:26:14 UK time. This is because the 6th block after the timestamp 12:20 UTC is the last block the two chains share in common before the split off at the 7th.
  • MTP (described above) is actually the median point of the last 11 blocks but it is essentially the same.
  • This means the fork kicked off at block #478558 and started with block #478553
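Here's that armchair math spelled out, assuming both chains were still at roughly the same difficulty just after the split, so relative hash rate is roughly proportional to relative block-finding rate:

    bch_blocks = 12       # BCH blocks mined since the split (as of this update)
    blocks_behind = 103   # how far BCH is behind BTC (from the stats above)
    btc_blocks = bch_blocks + blocks_behind   # BTC blocks mined over the same period

    # With (roughly) equal difficulty, hash rate is proportional to blocks found per unit time.
    print(f"BCH hash rate is roughly {bch_blocks / btc_blocks:.1%} of BTC's")   # ~10.4%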

Latest updates

  • Added blocks 11 & 12
  • Added block 10
  • Cleaned out updates list

Blocks

Block 12
Aug 2, 2017 4:15:01 AM (+ 23 min 12 sec from last block, 8 hours 53 min 20 sec since first block, 14 hours 55 min 1 sec since fork) / Block #478570 / Size: 85032B, 83KB, 0.08MB / Transactions: 142
jMCY/Genesis Block 269-273 Hennessy Road Wan Chai Hong Kong/E!
Block 11
Aug 2, 2017 3:51:49 AM (+ 1 hour 44 min 48 sec from last block, 8 hours 30 min 8 sec since first block, 14 hours 31 min 49 sec since fork) / Block #478569 / Size: 375414B, 366KB, 0.35MB / Transactions: 563
iME>Y/Genesis Block 269-273 Hennessy Road Wan Chai Hong Kong/L۳7
Block 10
Aug 2, 2017 2:07:01 AM (+ 1 hour 28 min 32 sec from last block, 6 hours 45 min 20 sec since first block, 12 hours 47 min 1 sec since fork) / Block #478568 / Size: 376371B, 367KB, 0.35MB / Transactions: 790
hMCPool 1 Genesis Block 269-273 Hennessy Road Wan Chai Hong Kong Y%w;
Block 9
Aug 2, 2017 12:38:29 AM (+ 1 hour 38 min 29 sec from last block, 5 hours 16 minutes 48 sec since first block, 11 hours 18 min 29 sec since fork) / Block #478567 / Size: 420294B, 410KB, 0.4MB / Transactions: 105
gMY/Genesis Block 269-273 Hennessy Road Wan Chai Hong Kong/
Block 8
Aug 1, 2017 10:39:21 PM (+ 3 min 37 sec from last block, 3 hours 17 minutes 40 sec since first block, 9 hours 19 min 21 sec since fork) / Block #478566 / [thread] / Size: 9327B, 9KB, 0.00889MB / Transactions: 21
fM Y/Genesis Block 269-273 Hennessy Road Wan Chai Hong Kong/ 6
Block 7
Aug 1, 2017 10:35:44 PM (+ 30 min 29 sec from last block, 3 hours 14 minutes 3 sec since first block, 9 hours 15 min 44 sec since fork) / Block #478565 / Size: 107160B, 104KB, 0.1MB / Transactions: 165 / [thread]
ViaBTC eM/ViaBTC/Hello World!/g
Block 6
Aug 1, 2017 10:05:15 PM (+ 1 hour 27 min 31 sec from last block, 2 hours 43 minutes 34 sec since first block, 8 hours 45 min 15 sec since fork) / Block #478564 / Size: 462505B, 451KB, 0.44MB / Transactions: 570 / [thread]
dMCPool 1 Genesis Block 269-273 Hennessy Road Wan Chai Hong Kong Y
Block 5
Aug 1, 2017 8:37:44 PM (+ 44 min 46 sec from last block, 1 hour 16 minutes 3 sec since first block, 7 hours 17 min 44 sec since fork) / Block #478563 / Size: 407906B, 398KB, 0.38MB / Transactions: 520 / [thread]
cMCPool 1 Genesis Block 269-273 Hennessy Road Wan Chai Hong Kong Y؈% L
Block 4
Aug 1, 2017 7:52:58 PM (+ 15 min 39 sec from last block, 31 minutes 17 sec since first block, 6 hours 32 min 58 sec since fork) / Block #478562 / Size: 89038B, 86KB, 0.084MB / Transactions: 502 / [thread]
bMCPool 1 Genesis Block 269-273 Hennessy Road Wan Chai Hong Kong Y S)
Block 3
Aug 1, 2017 7:37:19 PM (+ 4 min 13 sec from last block, 15 minutes 38 sec since first block, 6 hours 17 min 19 sec since fork) / Block #478561 / Size: 20241B, 19KB, 0.019MB / Transactions: 26 / [thread]
aM_ʀY/Genesis Block 269-273 Hennessy Road Wan Chai Hong Kong/E"
Block 2
Aug 1, 2017 7:33:06 PM (+ 24 min 38 sec from last block, 11 minutes 25 sec since first block, 6 hours 13 min 6 sec since fork) / Block #478560 / Size: 43055B, 42KB, 0.04MB / Transactions: 75 / [thread]
ViaBTC `M/ViaBTC/Hello World!/4s"~
Block 1
Aug 1, 2017 7:12:41 PM (+ 4 hours 56 min 27 sec after last block before fork, 5 hours 5 min 52 sec since fork) / Block #478559 / Size: 1915175B, 1870KB, 1.82MB (OVER 1MB - ALMOST 2MB!!) / Transactions: 6985 / [thread]
ViaBTC _M*/ViaBTC/Welcome to the world, Shuya Yang!/q30c
I'll probably be asleep at the next block - I will update when I wake and try and keep it going to +24 or +36 hours from fork! :)
submitted by Inthewirelain to btc [link] [comments]

Either ATMP or scale.cash is bottlenecking the stress test

If you watch transactions as they hit the mempool (e.g. txstreet.com), you'll notice that they tend to come in large batches, with several minutes elapsing in between batches. I've had scale.cash running and generating transactions during this interval, and noticed that the transactions I generate usually take several minutes before they're visible on block explorers or in my local node's mempool. For example, this transaction was generated by my scale.cash webpage about 14 minutes earlier, but when I queried Bitcoin ABC, it wasn't there yet:
    $ abccli getrawtransaction b639cf06646a01a93f29cbc9b773755158bb712e2e5f3c10978f745e89341a39
    error code: -5
    error message: No such mempool transaction. ...
Ten minutes later, I tried again, and this time it's there:
    $ abccli getrawtransaction b639cf06646a01a93f29cbc9b773755158bb712e2e5f3c10978f745e89341a39
    0200000001c0486ca96c7a8d4f5b1ea68f419ca2c76c1ec3d0613ed11746ead1b4d1addc64000000006a473044022009f68f4c84dd7d94758c49dffb6e4ae28bf74588475352803177cbbe0e0e765c022036f01d76c176a82c645987929cf73cc80d6a3b500f1a79321be4095564431b2141210340a65a40cb472752045abf1a5990d6d85a1d6f71da7dde40dd8b15c179961b1dffffffff02460b0000000000001976a9147a1402392a64f64894296d2528cf907e4b76432488ac0000000000000000186a1673747265737374657374626974636f696e2e6361736800000000
So something is bottlenecking transactions in between their generation in javascript in my web browser and the bulk of the full node network.
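If anyone wants to reproduce the measurement, polling the local node's mempool for the txid works; something like this (a sketch, assuming the same abccli wrapper used above):

    import json
    import subprocess
    import time

    TXID = "b639cf06646a01a93f29cbc9b773755158bb712e2e5f3c10978f745e89341a39"   # the tx from above

    start = time.time()
    while True:
        # getrawmempool returns the list of txids currently in the local node's mempool
        mempool = json.loads(subprocess.check_output(["abccli", "getrawmempool"]))
        if TXID in mempool:
            print(f"showed up locally after {time.time() - start:.0f} seconds")
            break
        time.sleep(5)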
This could just be an issue with scale.cash's webservers. We don't know anything about how those servers work.
But it could also be the known AcceptToMemoryPool bottleneck. Perhaps what is happening is that a large batch of transactions comes in and fills a node's network buffers. Eventually, AcceptToMemoryPool() gets run, locks cs_main and cs_mempool, and runs through all of the transactions. The locking of cs_mempool prevents the networking threads from reading mempool and uploading the transactions to the next peer until this batch of transactions is finished processing. Once that happens, the networking code locks cs_mempool and prevents AcceptToMemoryPool from running, causing the socket reading code to fill its buffers while waiting for ATMP to run again. The process then repeats indefinitely, causing batched broadcasts of transactions instead of smooth trickles.
Note: I'm not 100% sure that this is how the ATMP code and locks work. I haven't read that section for a while. But it seems likely that the ATMP bottleneck could result in transaction batching. We're getting about 60 tx/sec average, so seems like we're getting close to the expected ATMP bottleneck level of 100 tx/sec average (20 MB/block) that was seen in the Gigablock Testnet Initiative. It's possible that their servers were more consistently powerful than what we have on mainnet, resulting in the ATMP bottleneck being lower.
submitted by jtoomim to btc [link] [comments]

Blockchain.info - So nutzt ihr den Blockchain explorer ... What is a Bitcoin Block Explorer What is a Bitcoin Block Explorer - YouTube Blockchain - How To Verify A Bitcoin Transaction And Get ... Blockchain tutorial 27: Bitcoin raw transaction and transaction id

The most popular and trusted block explorer and crypto transaction search engine. Hey there, it's Greg.. I'll let you know about cool website updates, or if something seriously interesting happens in bitcoin. The most popular and trusted block explorer and crypto transaction search engine. The Bitcoin.com Explorer provides block, transaction, and address data for the Bitcoin Cash (BCH) and Bitcoin (BTC) chains. The data is displayed within an awesome interface and is available in several different languages. The most popular and trusted block explorer and crypto transaction search engine.

[index] [27606] [36676] [12876] [4267] [44962] [44086] [4342] [722] [8860] [19188]

Blockchain.info - So nutzt ihr den Blockchain explorer ...

A short video explaining what a block explorer is. For the complete text guide visit: http://bit.ly/2DD1vQT Join our 7-day Bitcoin crash course absolutely fr... *LIVE BITCOIN TRANSACTION* How To Use A Block Explorer How To Check "Unconfirmed" Transactions In this video, I do a demo of a live bitcoin cash transactio... Blockchain/Bitcoin for beginners 6: blocks and mining, content and creation of bitcoin blocks - Duration: 46:48. Matt Thomas 11,082 views Support our channel by using the Brave browser, browse up to 3 times faster, no ads, get rewarded for browsing: http://bit.ly/35vHo0M Learn all about what ha... For more tips like these visit http://bodymindsuccess.com/bitcoin or subscribe to our channel

#