SRBMiner-Multi adds support for Talleo

SRBMiner-Multi added initial support for Talleo’s algorithm in version 0.3.0, but that release was broken and stopped mining with error 0x4002. Version 0.3.1 fixed the error, but CPU mining performance is still low due to incorrect detection of usable threads. The thread count should be the L3 cache size divided by 256 kB.
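As a sketch, the intended thread count can be computed like this (assuming the L3 cache size is known in bytes; the 256 kB per-thread figure is taken from the rule above, and the function name is illustrative):

```python
def usable_threads(l3_cache_bytes: int, per_thread_bytes: int = 256 * 1024) -> int:
    """Number of CPU mining threads whose scratchpads fit in the L3 cache."""
    return l3_cache_bytes // per_thread_bytes

# A CPU with 8 MB of L3 cache should run 32 mining threads.
print(usable_threads(8 * 1024 * 1024))  # 32
```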

How to verify payments using the Talleo daemon

Payments can be verified using the payment ID, the wallet address and the private view key of the wallet.

The steps to verify payments using the RPC JSON API are:

  1. Get the current block height using “/getinfo”.
  2. Pass the payment ID to “/get_transaction_hashes_by_payment_id” to get a list of transaction hashes.
    • If supported by the daemon, set startIndex to 0 (or to the first block height to include) and endIndex to the current block height – 10, so that transactions with fewer than 10 confirmations are not returned. includeUnconfirmed should be set to false.
  3. Iterate through the transaction hashes with “/get_transaction_details_by_hashes”.
    • If your daemon doesn’t support filtering by includeUnconfirmed, only accept transactions where inBlockchain is true.
    • Verify that unlockTime is a positive number less than the current block height, and that blockIndex is less than the current block height – unlockTime.
  4. Check that the blocks containing the transactions are not orphaned, using the block hashes from step 3 and “/get_blocks_details_by_hashes”.
  5. Using “/get_amounts_for_account”, check that the transactions contain enough outputs to the wallet; you need the transaction hash, the wallet address and the private view key.
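The per-transaction checks from steps 2–3 can be sketched as a pure function over the fields returned by “/get_transaction_details_by_hashes”. Treating an unlockTime of 0 as “no lock” is an assumption based on common CryptoNote conventions, not something stated above:

```python
def is_payment_confirmed(tx: dict, current_height: int, min_conf: int = 10) -> bool:
    """Check one transaction record against the rules in steps 2-3.

    `tx` is assumed to carry the inBlockchain, unlockTime and blockIndex
    fields described above.
    """
    # Fallback for daemons without includeUnconfirmed filtering (step 3).
    if not tx.get("inBlockchain", False):
        return False
    unlock_time = tx.get("unlockTime", 0)
    block_index = tx["blockIndex"]
    # When a lock is set, apply the unlockTime/blockIndex checks from step 3.
    if unlock_time > 0 and (unlock_time >= current_height
                            or block_index >= current_height - unlock_time):
        return False
    # Require at least min_conf confirmations (step 2's endIndex rule).
    return current_height - block_index >= min_conf
```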

API update for mining pools

Node daemon (Talleod) version 0.4.6 introduced a new field in the response to the “getblocktemplate” API call. The field “num_transactions” contains the number of transactions included in the block template, excluding the coinbase transaction. Mining pools can use it directly to decide when the current block template might need to be refreshed.

A mining pool can refresh the block template either when the field value changes from 0 to non-zero, or when it changes from one non-zero value to another while the block height stays the same.
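The refresh rule can be sketched as a small decision function (the function name and parameters are illustrative, not part of the Talleod API):

```python
def should_refresh_template(prev_count: int, new_count: int,
                            prev_height: int, new_height: int) -> bool:
    """Decide whether the pool should fetch a fresh block template,
    based on the num_transactions field from getblocktemplate."""
    if prev_height != new_height:
        return True   # new block found: pools refresh regardless
    if prev_count == 0 and new_count > 0:
        return True   # first transactions arrived in the mempool
    if prev_count > 0 and new_count > 0 and new_count != prev_count:
        return True   # non-zero changed to a different non-zero value
    return False
```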

Daemon crashing on sync

We have noticed Talleod crashing on Linux when it tries to sync with other nodes and one of the received transactions has a very high anonymity level. This caused a buffer overflow in transaction validation. As a fix, we are limiting the mixin count to a maximum of 50.
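A minimal sketch of the new validation rule (the helper name is hypothetical; in CryptoNote terms the mixin count is the ring size minus one, i.e. the number of decoy outputs in an input's ring):

```python
MAX_MIXIN = 50  # cap introduced as the fix described above

def mixin_count_valid(ring_member_count: int) -> bool:
    """Reject transaction inputs whose mixin (decoy count) exceeds the cap."""
    mixin = ring_member_count - 1
    return 0 <= mixin <= MAX_MIXIN

# A ring of 51 members (mixin 50) passes; 52 members (mixin 51) is rejected.
```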

Mining pools getting lots of rejected blocks

Due to a block validation rule change at block height 10000, back-end software for mining pools needs a patch. Every time a block submission is attempted, the cached block template must be discarded, regardless of the result of the submission. This is because the cached block template might not contain the required number of transactions, and the daemon will reject it for that specific reason.
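One way to implement the discard-after-every-attempt rule is a template cache that is invalidated unconditionally on submission. This is an illustrative sketch, not actual pool code; the callables stand in for the daemon RPC:

```python
class TemplateCache:
    """Pool-side block template cache that is dropped after every
    submission attempt, win or lose."""

    def __init__(self, fetch_template):
        self._fetch = fetch_template  # callable returning a fresh template
        self._template = None

    def get(self):
        """Return the cached template, fetching a new one if needed."""
        if self._template is None:
            self._template = self._fetch()
        return self._template

    def submit(self, submit_block, block):
        """Submit a block and discard the template regardless of the result."""
        result = submit_block(block)
        self._template = None
        return result
```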

Missing msvcr120.dll when running Windows applications

Some of our users have reported that xmr-stak linked from our official website fails to run with an error message saying msvcr120.dll can’t be found. This file is part of the Microsoft Visual C++ 2013 run-time. As our builds are compiled using Visual C++ 2017 (15.9), we have not bundled an installer for the older run-time libraries.

There is a known issue with some builds of the Visual C++ 2013 run-time installers that causes the files to be deleted if both the 32-bit and 64-bit versions are installed. For more information, please read the official Knowledge Base article (KB3138367).

Network stall on December 7th, 2019

The network stalled at block height 10008 due to a logic flaw in the pool software on the official pool: it was not requesting a new block template from the daemon after the daemon rejected the previous block submission.

We rewrote the code that refreshes miner jobs so that it no longer blindly compares the block height of the current job with the block height of the network; instead, it refreshes the block template after every block submission attempt.

The original code was polling for a new block template every second, but didn’t refresh the miner jobs until the block height changed. This was obviously incorrect. We restricted that loop to the pool’s initial startup.
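The corrected flow can be sketched as follows (the stub callables stand in for the daemon RPC and the pool’s job broadcaster; names are illustrative):

```python
def handle_submission(daemon_submit, refresh_jobs, block):
    """Push new miner jobs after every submission attempt, instead of
    waiting for the network block height to change."""
    accepted = daemon_submit(block)
    refresh_jobs()  # unconditional: also runs when the daemon rejected the block
    return accepted
```

The unconditional refresh is what prevents the stall described above: after a rejected submission the height hasn’t changed, so a height comparison alone would never trigger a new job.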