Merge bitcoin/bitcoin#22778: net processing: Reduce resource usage for inbound block-relay-only connections

9db82f1bca [net processing] Don't initialize TxRelay for non-tx-relay peers. (John Newbery)
b0a4ac9c26 [net processing] Add m_tx_relay_mutex to protect m_tx_relay ptr (John Newbery)
290a8dab02 [net processing] Comment all TxRelay members (John Newbery)
42e3250497 [net processing] [refactor] Move m_next_send_feefilter and m_fee_filter_sent (John Newbery)

Pull request description:

  block-relay-only connections are additional outbound connections that bitcoind has made since v0.19. They participate in block relay, but do not propagate transactions or addresses. They were introduced in #15759.

  When creating an outbound block-relay-only connection, since we know that we're never going to announce transactions over that connection, we can save on memory usage by not allocating a `TxRelay` data structure for that connection. When receiving an inbound connection, we don't know whether the peer opened it as block-relay-only or not, and therefore we always construct a `TxRelay` data structure for inbound connections.

  However, it is possible to tell whether an inbound connection will ever request that we start announcing transactions to it. The `fRelay` field in the `version` message may be set to `0` to indicate that the peer does not wish to receive transaction announcements. The peer may later request that we start announcing transactions to it by sending a `filterload` or `filterclear` message, **but only if we have offered `NODE_BLOOM` services to that peer**. `NODE_BLOOM` services are disabled by default, and it has been recommended for some time that users not enable `NODE_BLOOM` services on public connections, for privacy and anti-DoS reasons.

  Therefore, if we have not offered `NODE_BLOOM` to the peer _and_ it has set `fRelay` to `0`, then we know that it will never request transaction announcements, and that we can save resources by not initializing the `TxRelay` data structure.
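  As a minimal illustration of the new policy (the names and parameters below are illustrative only; the actual check lives in the `version` handler shown in the diff and uses `pfrom.IsBlockOnlyConn()`, `fRelay` and `pfrom.GetLocalServices() & NODE_BLOOM`):

      // Sketch of the TxRelay allocation policy introduced by this change:
      // returns true if per-peer transaction-relay state should be allocated.
      bool ShouldAllocateTxRelay(bool is_outbound_block_relay_only, // we opened this connection as block-relay-only
                                 bool peer_set_frelay,              // the fRelay bit from the peer's `version` message
                                 bool we_offered_node_bloom)        // we advertised NODE_BLOOM services to this peer
      {
          // Our own block-relay-only connections never take part in transaction relay.
          if (is_outbound_block_relay_only) return false;
          // Otherwise allocate if the peer asked for announcements up front, or if it
          // could still opt in later via `filterload`/`filterclear`, which is only
          // possible when we have offered NODE_BLOOM.
          return peer_set_frelay || we_offered_node_bloom;
      }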

ACKs for top commit:
  MarcoFalke:
    review ACK 9db82f1bca 🖖
  dergoegge:
    Code review ACK 9db82f1bca
  naumenkogs:
    ACK 9db82f1bca

Tree-SHA512: 83a449a56cd6bf6ad05369f5ab91516e51b8c471c07ae38c886d51461e942d492ca34ae63d329c46e56d96d0baf59a3e34233e4289868f911db3b567072bdc41
fanquake committed 2022-05-19 09:09:59 +01:00
commit 986bae8e72
GPG Key ID: 2EEB9F5CC09526C1 (no known key found for this signature in database)


@@ -239,36 +239,62 @@ struct Peer {
     /** Whether this peer relays txs via wtxid */
     std::atomic<bool> m_wtxid_relay{false};
 
+    /** The feerate in the most recent BIP133 `feefilter` message sent to the peer.
+     * It is *not* a p2p protocol violation for the peer to send us
+     * transactions with a lower fee rate than this. See BIP133. */
+    CAmount m_fee_filter_sent{0};
+    /** Timestamp after which we will send the next BIP133 `feefilter` message
+     * to the peer. */
+    std::chrono::microseconds m_next_send_feefilter{0};
+
     struct TxRelay {
         mutable RecursiveMutex m_bloom_filter_mutex;
-        // We use m_relay_txs for two purposes -
-        // a) it allows us to not relay tx invs before receiving the peer's version message
-        // b) the peer may tell us in its version message that we should not relay tx invs
-        //    unless it loads a bloom filter.
+        /** Whether the peer wishes to receive transaction announcements.
+         *
+         * This is initially set based on the fRelay flag in the received
+         * `version` message. If initially set to false, it can only be flipped
+         * to true if we have offered the peer NODE_BLOOM services and it sends
+         * us a `filterload` or `filterclear` message. See BIP37. */
         bool m_relay_txs GUARDED_BY(m_bloom_filter_mutex){false};
+        /** A bloom filter for which transactions to announce to the peer. See BIP37. */
         std::unique_ptr<CBloomFilter> m_bloom_filter PT_GUARDED_BY(m_bloom_filter_mutex) GUARDED_BY(m_bloom_filter_mutex){nullptr};
 
         mutable RecursiveMutex m_tx_inventory_mutex;
+        /** A filter of all the txids and wtxids that the peer has announced to
+         * us or we have announced to the peer. We use this to avoid announcing
+         * the same txid/wtxid to a peer that already has the transaction. */
         CRollingBloomFilter m_tx_inventory_known_filter GUARDED_BY(m_tx_inventory_mutex){50000, 0.000001};
-        // Set of transaction ids we still have to announce.
-        // They are sorted by the mempool before relay, so the order is not important.
+        /** Set of transaction ids we still have to announce (txid for
+         * non-wtxid-relay peers, wtxid for wtxid-relay peers). We use the
+         * mempool to sort transactions in dependency order before relay, so
+         * this does not have to be sorted. */
         std::set<uint256> m_tx_inventory_to_send;
-        // Used for BIP35 mempool sending
+        /** Whether the peer has requested us to send our complete mempool. Only
+         * permitted if the peer has NetPermissionFlags::Mempool. See BIP35. */
         bool m_send_mempool GUARDED_BY(m_tx_inventory_mutex){false};
-        // Last time a "MEMPOOL" request was serviced.
+        /** The last time a BIP35 `mempool` request was serviced. */
         std::atomic<std::chrono::seconds> m_last_mempool_req{0s};
+        /** The next time after which we will send an `inv` message containing
+         * transaction announcements to this peer. */
         std::chrono::microseconds m_next_inv_send_time{0};
 
-        /** Minimum fee rate with which to filter inv's to this node */
+        /** Minimum fee rate with which to filter transaction announcements to this node. See BIP133. */
         std::atomic<CAmount> m_fee_filter_received{0};
-        CAmount m_fee_filter_sent{0};
-        std::chrono::microseconds m_next_send_feefilter{0};
     };
 
-    /** Transaction relay data. Will be a nullptr if we're not relaying
-     *  transactions with this peer (e.g. if it's a block-relay-only peer) */
-    std::unique_ptr<TxRelay> m_tx_relay;
+    /* Initializes a TxRelay struct for this peer. Can be called at most once for a peer. */
+    TxRelay* SetTxRelay()
+    {
+        LOCK(m_tx_relay_mutex);
+        Assume(!m_tx_relay);
+        m_tx_relay = std::make_unique<Peer::TxRelay>();
+        return m_tx_relay.get();
+    };
+
+    TxRelay* GetTxRelay()
+    {
+        return WITH_LOCK(m_tx_relay_mutex, return m_tx_relay.get());
+    };
 
     /** A vector of addresses to send to the peer, limited to MAX_ADDR_TO_SEND. */
     std::vector<CAddress> m_addrs_to_send;
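As the later hunks show, call sites no longer touch the pointer directly: they fetch it once via GetTxRelay() and then take only the relevant TxRelay mutex, since the TxRelay object, once created by SetTxRelay(), is not torn down again in this change. A minimal sketch of the call-site pattern, adapted from the AddKnownTx hunk below:

    // Copy the raw pointer once (GetTxRelay() locks m_tx_relay_mutex internally),
    // then operate on the TxRelay under its own, more specific mutex.
    auto tx_relay = peer.GetTxRelay();
    if (!tx_relay) return; // peer does not participate in transaction relay
    LOCK(tx_relay->m_tx_inventory_mutex);
    tx_relay->m_tx_inventory_known_filter.insert(hash);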
@@ -328,10 +354,17 @@ struct Peer {
     /** Work queue of items requested by this peer **/
     std::deque<CInv> m_getdata_requests GUARDED_BY(m_getdata_requests_mutex);
 
-    explicit Peer(NodeId id, bool tx_relay)
-        : m_id(id)
-        , m_tx_relay(tx_relay ? std::make_unique<TxRelay>() : nullptr)
+    Peer(NodeId id)
+        : m_id{id}
     {}
+
+private:
+    Mutex m_tx_relay_mutex;
+
+    /** Transaction relay data. Will be a nullptr if we're not relaying
+     * transactions with this peer (e.g. if it's a block-relay-only peer or
+     * the peer has sent us fRelay=false with bloom filters disabled). */
+    std::unique_ptr<TxRelay> m_tx_relay GUARDED_BY(m_tx_relay_mutex);
 };
 
 using PeerRef = std::shared_ptr<Peer>;
@@ -883,10 +916,11 @@ static void PushAddress(Peer& peer, const CAddress& addr, FastRandomContext& ins
 static void AddKnownTx(Peer& peer, const uint256& hash)
 {
-    if (peer.m_tx_relay != nullptr) {
-        LOCK(peer.m_tx_relay->m_tx_inventory_mutex);
-        peer.m_tx_relay->m_tx_inventory_known_filter.insert(hash);
-    }
+    auto tx_relay = peer.GetTxRelay();
+    if (!tx_relay) return;
+
+    LOCK(tx_relay->m_tx_inventory_mutex);
+    tx_relay->m_tx_inventory_known_filter.insert(hash);
 }
 
 std::chrono::microseconds PeerManagerImpl::NextInvToInbounds(std::chrono::microseconds now,
@@ -1186,7 +1220,7 @@ void PeerManagerImpl::PushNodeVersion(CNode& pnode, const Peer& peer)
     CService addr_you = addr.IsRoutable() && !IsProxy(addr) && addr.IsAddrV1Compatible() ? addr : CService();
     uint64_t your_services{addr.nServices};
 
-    const bool tx_relay = !m_ignore_incoming_txs && peer.m_tx_relay != nullptr && !pnode.IsFeelerConn();
+    const bool tx_relay = !m_ignore_incoming_txs && !pnode.IsBlockOnlyConn() && !pnode.IsFeelerConn();
     m_connman.PushMessage(&pnode, CNetMsgMaker(INIT_PROTO_VERSION).Make(NetMsgType::VERSION, PROTOCOL_VERSION, my_services, nTime,
             your_services, addr_you, // Together the pre-version-31402 serialization of CAddress "addrYou" (without nTime)
             my_services, CService(), // Together the pre-version-31402 serialization of CAddress "addrMe" (without nTime)
@@ -1241,7 +1275,7 @@ void PeerManagerImpl::InitializeNode(CNode *pnode)
         m_node_states.emplace_hint(m_node_states.end(), std::piecewise_construct, std::forward_as_tuple(nodeid), std::forward_as_tuple(pnode->IsInboundConn()));
         assert(m_txrequest.Count(nodeid) == 0);
     }
-    PeerRef peer = std::make_shared<Peer>(nodeid, /*tx_relay=*/ !pnode->IsBlockOnlyConn());
+    PeerRef peer = std::make_shared<Peer>(nodeid);
     {
         LOCK(m_peer_mutex);
         m_peer_map.emplace_hint(m_peer_map.end(), nodeid, peer);
@@ -1377,9 +1411,9 @@ bool PeerManagerImpl::GetNodeStateStats(NodeId nodeid, CNodeStateStats& stats) c
         ping_wait = GetTime<std::chrono::microseconds>() - peer->m_ping_start.load();
     }
 
-    if (peer->m_tx_relay != nullptr) {
-        stats.m_relay_txs = WITH_LOCK(peer->m_tx_relay->m_bloom_filter_mutex, return peer->m_tx_relay->m_relay_txs);
-        stats.m_fee_filter_received = peer->m_tx_relay->m_fee_filter_received.load();
+    if (auto tx_relay = peer->GetTxRelay(); tx_relay != nullptr) {
+        stats.m_relay_txs = WITH_LOCK(tx_relay->m_bloom_filter_mutex, return tx_relay->m_relay_txs);
+        stats.m_fee_filter_received = tx_relay->m_fee_filter_received.load();
     } else {
         stats.m_relay_txs = false;
         stats.m_fee_filter_received = 0;
@@ -1794,12 +1828,13 @@ void PeerManagerImpl::RelayTransaction(const uint256& txid, const uint256& wtxid
     LOCK(m_peer_mutex);
     for(auto& it : m_peer_map) {
         Peer& peer = *it.second;
-        if (!peer.m_tx_relay) continue;
+        auto tx_relay = peer.GetTxRelay();
+        if (!tx_relay) continue;
 
         const uint256& hash{peer.m_wtxid_relay ? wtxid : txid};
-        LOCK(peer.m_tx_relay->m_tx_inventory_mutex);
-        if (!peer.m_tx_relay->m_tx_inventory_known_filter.contains(hash)) {
-            peer.m_tx_relay->m_tx_inventory_to_send.insert(hash);
+        LOCK(tx_relay->m_tx_inventory_mutex);
+        if (!tx_relay->m_tx_inventory_known_filter.contains(hash)) {
+            tx_relay->m_tx_inventory_to_send.insert(hash);
         }
     };
 }
@@ -1948,11 +1983,11 @@ void PeerManagerImpl::ProcessGetBlockData(CNode& pfrom, Peer& peer, const CInv&
         } else if (inv.IsMsgFilteredBlk()) {
             bool sendMerkleBlock = false;
             CMerkleBlock merkleBlock;
-            if (peer.m_tx_relay != nullptr) {
-                LOCK(peer.m_tx_relay->m_bloom_filter_mutex);
-                if (peer.m_tx_relay->m_bloom_filter) {
+            if (auto tx_relay = peer.GetTxRelay(); tx_relay != nullptr) {
+                LOCK(tx_relay->m_bloom_filter_mutex);
+                if (tx_relay->m_bloom_filter) {
                     sendMerkleBlock = true;
-                    merkleBlock = CMerkleBlock(*pblock, *peer.m_tx_relay->m_bloom_filter);
+                    merkleBlock = CMerkleBlock(*pblock, *tx_relay->m_bloom_filter);
                 }
             }
             if (sendMerkleBlock) {
@@ -2033,13 +2068,15 @@ void PeerManagerImpl::ProcessGetData(CNode& pfrom, Peer& peer, const std::atomic
 {
     AssertLockNotHeld(cs_main);
 
+    auto tx_relay = peer.GetTxRelay();
+
     std::deque<CInv>::iterator it = peer.m_getdata_requests.begin();
     std::vector<CInv> vNotFound;
     const CNetMsgMaker msgMaker(pfrom.GetCommonVersion());
 
     const auto now{GetTime<std::chrono::seconds>()};
     // Get last mempool request time
-    const auto mempool_req = peer.m_tx_relay != nullptr ? peer.m_tx_relay->m_last_mempool_req.load() : std::chrono::seconds::min();
+    const auto mempool_req = tx_relay != nullptr ? tx_relay->m_last_mempool_req.load() : std::chrono::seconds::min();
 
     // Process as many TX items from the front of the getdata queue as
     // possible, since they're common and it's efficient to batch process
@@ -2052,8 +2089,9 @@ void PeerManagerImpl::ProcessGetData(CNode& pfrom, Peer& peer, const std::atomic
         const CInv &inv = *it++;
 
-        if (peer.m_tx_relay == nullptr) {
-            // Ignore GETDATA requests for transactions from blocks-only peers.
+        if (tx_relay == nullptr) {
+            // Ignore GETDATA requests for transactions from block-relay-only
+            // peers and peers that asked us not to announce transactions.
             continue;
         }
@@ -2080,7 +2118,7 @@ void PeerManagerImpl::ProcessGetData(CNode& pfrom, Peer& peer, const std::atomic
             }
             for (const uint256& parent_txid : parent_ids_to_add) {
                 // Relaying a transaction with a recent but unconfirmed parent.
-                if (WITH_LOCK(peer.m_tx_relay->m_tx_inventory_mutex, return !peer.m_tx_relay->m_tx_inventory_known_filter.contains(parent_txid))) {
+                if (WITH_LOCK(tx_relay->m_tx_inventory_mutex, return !tx_relay->m_tx_inventory_known_filter.contains(parent_txid))) {
                     LOCK(cs_main);
                     State(pfrom.GetId())->m_recently_announced_invs.insert(parent_txid);
                 }
@@ -2715,10 +2753,16 @@ void PeerManagerImpl::ProcessMessage(CNode& pfrom, const std::string& msg_type,
         // set nodes not capable of serving the complete blockchain history as "limited nodes"
         pfrom.m_limited_node = (!(nServices & NODE_NETWORK) && (nServices & NODE_NETWORK_LIMITED));
 
-        if (peer->m_tx_relay != nullptr) {
+        // We only initialize the m_tx_relay data structure if:
+        // - this isn't an outbound block-relay-only connection; and
+        // - fRelay=true or we're offering NODE_BLOOM to this peer
+        //   (NODE_BLOOM means that the peer may turn on tx relay later)
+        if (!pfrom.IsBlockOnlyConn() &&
+            (fRelay || (pfrom.GetLocalServices() & NODE_BLOOM))) {
+            auto* const tx_relay = peer->SetTxRelay();
             {
-                LOCK(peer->m_tx_relay->m_bloom_filter_mutex);
-                peer->m_tx_relay->m_relay_txs = fRelay; // set to true after we get the first filter* message
+                LOCK(tx_relay->m_bloom_filter_mutex);
+                tx_relay->m_relay_txs = fRelay; // set to true after we get the first filter* message
             }
             if (fRelay) pfrom.m_relays_txs = true;
         }
@@ -3038,7 +3082,7 @@ void PeerManagerImpl::ProcessMessage(CNode& pfrom, const std::string& msg_type,
         // Reject tx INVs when the -blocksonly setting is enabled, or this is a
        // block-relay-only peer
-        bool reject_tx_invs{m_ignore_incoming_txs || (peer->m_tx_relay == nullptr)};
+        bool reject_tx_invs{m_ignore_incoming_txs || pfrom.IsBlockOnlyConn()};
 
         // Allow peers with relay permission to send data other than blocks in blocks only mode
         if (pfrom.HasPermission(NetPermissionFlags::Relay)) {
@@ -3311,9 +3355,9 @@ void PeerManagerImpl::ProcessMessage(CNode& pfrom, const std::string& msg_type,
     if (msg_type == NetMsgType::TX) {
         // Stop processing the transaction early if
-        // 1) We are in blocks only mode and peer has no relay permission
+        // 1) We are in blocks only mode and peer has no relay permission; OR
         // 2) This peer is a block-relay-only peer
-        if ((m_ignore_incoming_txs && !pfrom.HasPermission(NetPermissionFlags::Relay)) || (peer->m_tx_relay == nullptr)) {
+        if ((m_ignore_incoming_txs && !pfrom.HasPermission(NetPermissionFlags::Relay)) || pfrom.IsBlockOnlyConn()) {
             LogPrint(BCLog::NET, "transaction sent in violation of protocol peer=%d\n", pfrom.GetId());
             pfrom.fDisconnect = true;
             return;
@@ -3919,9 +3963,9 @@ void PeerManagerImpl::ProcessMessage(CNode& pfrom, const std::string& msg_type,
             return;
         }
 
-        if (peer->m_tx_relay != nullptr) {
-            LOCK(peer->m_tx_relay->m_tx_inventory_mutex);
-            peer->m_tx_relay->m_send_mempool = true;
+        if (auto tx_relay = peer->GetTxRelay(); tx_relay != nullptr) {
+            LOCK(tx_relay->m_tx_inventory_mutex);
+            tx_relay->m_send_mempool = true;
         }
         return;
     }
@@ -4014,16 +4058,13 @@ void PeerManagerImpl::ProcessMessage(CNode& pfrom, const std::string& msg_type,
         {
             // There is no excuse for sending a too-large filter
             Misbehaving(pfrom.GetId(), 100, "too-large bloom filter");
-        }
-        else if (peer->m_tx_relay != nullptr)
-        {
+        } else if (auto tx_relay = peer->GetTxRelay(); tx_relay != nullptr) {
             {
-                LOCK(peer->m_tx_relay->m_bloom_filter_mutex);
-                peer->m_tx_relay->m_bloom_filter.reset(new CBloomFilter(filter));
-                peer->m_tx_relay->m_relay_txs = true;
+                LOCK(tx_relay->m_bloom_filter_mutex);
+                tx_relay->m_bloom_filter.reset(new CBloomFilter(filter));
+                tx_relay->m_relay_txs = true;
             }
             pfrom.m_bloom_filter_loaded = true;
-            pfrom.m_relays_txs = true;
         }
         return;
     }
@@ -4042,10 +4083,10 @@ void PeerManagerImpl::ProcessMessage(CNode& pfrom, const std::string& msg_type,
         bool bad = false;
         if (vData.size() > MAX_SCRIPT_ELEMENT_SIZE) {
             bad = true;
-        } else if (peer->m_tx_relay != nullptr) {
-            LOCK(peer->m_tx_relay->m_bloom_filter_mutex);
-            if (peer->m_tx_relay->m_bloom_filter) {
-                peer->m_tx_relay->m_bloom_filter->insert(vData);
+        } else if (auto tx_relay = peer->GetTxRelay(); tx_relay != nullptr) {
+            LOCK(tx_relay->m_bloom_filter_mutex);
+            if (tx_relay->m_bloom_filter) {
+                tx_relay->m_bloom_filter->insert(vData);
             } else {
                 bad = true;
             }
@@ -4062,14 +4103,13 @@ void PeerManagerImpl::ProcessMessage(CNode& pfrom, const std::string& msg_type,
             pfrom.fDisconnect = true;
             return;
         }
-        if (peer->m_tx_relay == nullptr) {
-            return;
-        }
+        auto tx_relay = peer->GetTxRelay();
+        if (!tx_relay) return;
 
         {
-            LOCK(peer->m_tx_relay->m_bloom_filter_mutex);
-            peer->m_tx_relay->m_bloom_filter = nullptr;
-            peer->m_tx_relay->m_relay_txs = true;
+            LOCK(tx_relay->m_bloom_filter_mutex);
+            tx_relay->m_bloom_filter = nullptr;
+            tx_relay->m_relay_txs = true;
         }
         pfrom.m_bloom_filter_loaded = false;
         pfrom.m_relays_txs = true;
@@ -4080,8 +4120,8 @@ void PeerManagerImpl::ProcessMessage(CNode& pfrom, const std::string& msg_type,
         CAmount newFeeFilter = 0;
         vRecv >> newFeeFilter;
         if (MoneyRange(newFeeFilter)) {
-            if (peer->m_tx_relay != nullptr) {
-                peer->m_tx_relay->m_fee_filter_received = newFeeFilter;
+            if (auto tx_relay = peer->GetTxRelay(); tx_relay != nullptr) {
+                tx_relay->m_fee_filter_received = newFeeFilter;
             }
             LogPrint(BCLog::NET, "received: feefilter of %s from peer=%d\n", CFeeRate(newFeeFilter).ToString(), pfrom.GetId());
         }
@@ -4542,10 +4582,12 @@ void PeerManagerImpl::MaybeSendAddr(CNode& node, Peer& peer, std::chrono::micros
 void PeerManagerImpl::MaybeSendFeefilter(CNode& pto, Peer& peer, std::chrono::microseconds current_time)
 {
     if (m_ignore_incoming_txs) return;
-    if (!peer.m_tx_relay) return;
     if (pto.GetCommonVersion() < FEEFILTER_VERSION) return;
     // peers with the forcerelay permission should not filter txs to us
     if (pto.HasPermission(NetPermissionFlags::ForceRelay)) return;
+    // Don't send feefilter messages to outbound block-relay-only peers since they should never announce
+    // transactions to us, regardless of feefilter state.
+    if (pto.IsBlockOnlyConn()) return;
 
     CAmount currentFilter = m_mempool.GetMinFee(gArgs.GetIntArg("-maxmempool", DEFAULT_MAX_MEMPOOL_SIZE) * 1000000).GetFeePerK();
     static FeeFilterRounder g_filter_rounder{CFeeRate{DEFAULT_MIN_RELAY_TX_FEE}};
@@ -4556,27 +4598,27 @@ void PeerManagerImpl::MaybeSendFeefilter(CNode& pto, Peer& peer, std::chrono::mi
         currentFilter = MAX_MONEY;
     } else {
         static const CAmount MAX_FILTER{g_filter_rounder.round(MAX_MONEY)};
-        if (peer.m_tx_relay->m_fee_filter_sent == MAX_FILTER) {
+        if (peer.m_fee_filter_sent == MAX_FILTER) {
             // Send the current filter if we sent MAX_FILTER previously
             // and made it out of IBD.
-            peer.m_tx_relay->m_next_send_feefilter = 0us;
+            peer.m_next_send_feefilter = 0us;
         }
     }
-    if (current_time > peer.m_tx_relay->m_next_send_feefilter) {
+    if (current_time > peer.m_next_send_feefilter) {
         CAmount filterToSend = g_filter_rounder.round(currentFilter);
         // We always have a fee filter of at least minRelayTxFee
         filterToSend = std::max(filterToSend, ::minRelayTxFee.GetFeePerK());
-        if (filterToSend != peer.m_tx_relay->m_fee_filter_sent) {
+        if (filterToSend != peer.m_fee_filter_sent) {
             m_connman.PushMessage(&pto, CNetMsgMaker(pto.GetCommonVersion()).Make(NetMsgType::FEEFILTER, filterToSend));
-            peer.m_tx_relay->m_fee_filter_sent = filterToSend;
+            peer.m_fee_filter_sent = filterToSend;
         }
-        peer.m_tx_relay->m_next_send_feefilter = GetExponentialRand(current_time, AVG_FEEFILTER_BROADCAST_INTERVAL);
+        peer.m_next_send_feefilter = GetExponentialRand(current_time, AVG_FEEFILTER_BROADCAST_INTERVAL);
     }
     // If the fee filter has changed substantially and it's still more than MAX_FEEFILTER_CHANGE_DELAY
     // until scheduled broadcast, then move the broadcast to within MAX_FEEFILTER_CHANGE_DELAY.
-    else if (current_time + MAX_FEEFILTER_CHANGE_DELAY < peer.m_tx_relay->m_next_send_feefilter &&
-             (currentFilter < 3 * peer.m_tx_relay->m_fee_filter_sent / 4 || currentFilter > 4 * peer.m_tx_relay->m_fee_filter_sent / 3)) {
-        peer.m_tx_relay->m_next_send_feefilter = current_time + GetRandomDuration<std::chrono::microseconds>(MAX_FEEFILTER_CHANGE_DELAY);
+    else if (current_time + MAX_FEEFILTER_CHANGE_DELAY < peer.m_next_send_feefilter &&
+             (currentFilter < 3 * peer.m_fee_filter_sent / 4 || currentFilter > 4 * peer.m_fee_filter_sent / 3)) {
+        peer.m_next_send_feefilter = current_time + GetRandomDuration<std::chrono::microseconds>(MAX_FEEFILTER_CHANGE_DELAY);
     }
 }
@@ -4838,45 +4880,45 @@ bool PeerManagerImpl::SendMessages(CNode* pto)
             peer->m_blocks_for_inv_relay.clear();
         }
 
-        if (peer->m_tx_relay != nullptr) {
-            LOCK(peer->m_tx_relay->m_tx_inventory_mutex);
+        if (auto tx_relay = peer->GetTxRelay(); tx_relay != nullptr) {
+            LOCK(tx_relay->m_tx_inventory_mutex);
             // Check whether periodic sends should happen
             bool fSendTrickle = pto->HasPermission(NetPermissionFlags::NoBan);
-            if (peer->m_tx_relay->m_next_inv_send_time < current_time) {
+            if (tx_relay->m_next_inv_send_time < current_time) {
                 fSendTrickle = true;
                 if (pto->IsInboundConn()) {
-                    peer->m_tx_relay->m_next_inv_send_time = NextInvToInbounds(current_time, INBOUND_INVENTORY_BROADCAST_INTERVAL);
+                    tx_relay->m_next_inv_send_time = NextInvToInbounds(current_time, INBOUND_INVENTORY_BROADCAST_INTERVAL);
                 } else {
-                    peer->m_tx_relay->m_next_inv_send_time = GetExponentialRand(current_time, OUTBOUND_INVENTORY_BROADCAST_INTERVAL);
+                    tx_relay->m_next_inv_send_time = GetExponentialRand(current_time, OUTBOUND_INVENTORY_BROADCAST_INTERVAL);
                 }
             }
 
             // Time to send but the peer has requested we not relay transactions.
             if (fSendTrickle) {
-                LOCK(peer->m_tx_relay->m_bloom_filter_mutex);
-                if (!peer->m_tx_relay->m_relay_txs) peer->m_tx_relay->m_tx_inventory_to_send.clear();
+                LOCK(tx_relay->m_bloom_filter_mutex);
+                if (!tx_relay->m_relay_txs) tx_relay->m_tx_inventory_to_send.clear();
             }
 
             // Respond to BIP35 mempool requests
-            if (fSendTrickle && peer->m_tx_relay->m_send_mempool) {
+            if (fSendTrickle && tx_relay->m_send_mempool) {
                 auto vtxinfo = m_mempool.infoAll();
-                peer->m_tx_relay->m_send_mempool = false;
-                const CFeeRate filterrate{peer->m_tx_relay->m_fee_filter_received.load()};
+                tx_relay->m_send_mempool = false;
+                const CFeeRate filterrate{tx_relay->m_fee_filter_received.load()};
 
-                LOCK(peer->m_tx_relay->m_bloom_filter_mutex);
+                LOCK(tx_relay->m_bloom_filter_mutex);
 
                 for (const auto& txinfo : vtxinfo) {
                     const uint256& hash = peer->m_wtxid_relay ? txinfo.tx->GetWitnessHash() : txinfo.tx->GetHash();
                     CInv inv(peer->m_wtxid_relay ? MSG_WTX : MSG_TX, hash);
-                    peer->m_tx_relay->m_tx_inventory_to_send.erase(hash);
+                    tx_relay->m_tx_inventory_to_send.erase(hash);
                    // Don't send transactions that peers will not put into their mempool
                     if (txinfo.fee < filterrate.GetFee(txinfo.vsize)) {
                         continue;
                     }
-                    if (peer->m_tx_relay->m_bloom_filter) {
-                        if (!peer->m_tx_relay->m_bloom_filter->IsRelevantAndUpdate(*txinfo.tx)) continue;
+                    if (tx_relay->m_bloom_filter) {
+                        if (!tx_relay->m_bloom_filter->IsRelevantAndUpdate(*txinfo.tx)) continue;
                     }
-                    peer->m_tx_relay->m_tx_inventory_known_filter.insert(hash);
+                    tx_relay->m_tx_inventory_known_filter.insert(hash);
                     // Responses to MEMPOOL requests bypass the m_recently_announced_invs filter.
                     vInv.push_back(inv);
                     if (vInv.size() == MAX_INV_SZ) {
@@ -4884,18 +4926,18 @@ bool PeerManagerImpl::SendMessages(CNode* pto)
                        vInv.clear();
                     }
                 }
-                peer->m_tx_relay->m_last_mempool_req = std::chrono::duration_cast<std::chrono::seconds>(current_time);
+                tx_relay->m_last_mempool_req = std::chrono::duration_cast<std::chrono::seconds>(current_time);
             }
 
             // Determine transactions to relay
             if (fSendTrickle) {
                 // Produce a vector with all candidates for sending
                 std::vector<std::set<uint256>::iterator> vInvTx;
-                vInvTx.reserve(peer->m_tx_relay->m_tx_inventory_to_send.size());
-                for (std::set<uint256>::iterator it = peer->m_tx_relay->m_tx_inventory_to_send.begin(); it != peer->m_tx_relay->m_tx_inventory_to_send.end(); it++) {
+                vInvTx.reserve(tx_relay->m_tx_inventory_to_send.size());
+                for (std::set<uint256>::iterator it = tx_relay->m_tx_inventory_to_send.begin(); it != tx_relay->m_tx_inventory_to_send.end(); it++) {
                     vInvTx.push_back(it);
                 }
-                const CFeeRate filterrate{peer->m_tx_relay->m_fee_filter_received.load()};
+                const CFeeRate filterrate{tx_relay->m_fee_filter_received.load()};
                 // Topologically and fee-rate sort the inventory we send for privacy and priority reasons.
                 // A heap is used so that not all items need sorting if only a few are being sent.
                 CompareInvMempoolOrder compareInvMempoolOrder(&m_mempool, peer->m_wtxid_relay);
@@ -4903,7 +4945,7 @@ bool PeerManagerImpl::SendMessages(CNode* pto)
                 // No reason to drain out at many times the network's capacity,
                 // especially since we have many peers and some will draw much shorter delays.
                 unsigned int nRelayedTransactions = 0;
-                LOCK(peer->m_tx_relay->m_bloom_filter_mutex);
+                LOCK(tx_relay->m_bloom_filter_mutex);
                 while (!vInvTx.empty() && nRelayedTransactions < INVENTORY_BROADCAST_MAX) {
                     // Fetch the top element from the heap
                     std::pop_heap(vInvTx.begin(), vInvTx.end(), compareInvMempoolOrder);
@@ -4912,9 +4954,9 @@ bool PeerManagerImpl::SendMessages(CNode* pto)
                     uint256 hash = *it;
                     CInv inv(peer->m_wtxid_relay ? MSG_WTX : MSG_TX, hash);
                     // Remove it from the to-be-sent set
-                    peer->m_tx_relay->m_tx_inventory_to_send.erase(it);
+                    tx_relay->m_tx_inventory_to_send.erase(it);
                     // Check if not in the filter already
-                    if (peer->m_tx_relay->m_tx_inventory_known_filter.contains(hash)) {
+                    if (tx_relay->m_tx_inventory_known_filter.contains(hash)) {
                         continue;
                     }
                     // Not in the mempool anymore? don't bother sending it.
@@ -4928,7 +4970,7 @@ bool PeerManagerImpl::SendMessages(CNode* pto)
                    if (txinfo.fee < filterrate.GetFee(txinfo.vsize)) {
                         continue;
                     }
-                    if (peer->m_tx_relay->m_bloom_filter && !peer->m_tx_relay->m_bloom_filter->IsRelevantAndUpdate(*txinfo.tx)) continue;
+                    if (tx_relay->m_bloom_filter && !tx_relay->m_bloom_filter->IsRelevantAndUpdate(*txinfo.tx)) continue;
                     // Send
                     State(pto->GetId())->m_recently_announced_invs.insert(hash);
                     vInv.push_back(inv);
@@ -4955,14 +4997,14 @@ bool PeerManagerImpl::SendMessages(CNode* pto)
                         m_connman.PushMessage(pto, msgMaker.Make(NetMsgType::INV, vInv));
                         vInv.clear();
                     }
-                    peer->m_tx_relay->m_tx_inventory_known_filter.insert(hash);
+                    tx_relay->m_tx_inventory_known_filter.insert(hash);
                     if (hash != txid) {
                         // Insert txid into m_tx_inventory_known_filter, even for
                         // wtxidrelay peers. This prevents re-adding of
                        // unconfirmed parents to the recently_announced
                         // filter, when a child tx is requested. See
                         // ProcessGetData().
-                        peer->m_tx_relay->m_tx_inventory_known_filter.insert(txid);
+                        tx_relay->m_tx_inventory_known_filter.insert(txid);
                     }
                 }
             }