RFC 6677 DNS Transport over TCP – Implementation Requirements (2016)

Posted by 1vuio0pswjnm7 5 days ago


Comments

Comment by 1vuio0pswjnm7 4 hours ago

RFC 7766

3. Terminology

o Pipelining: the sending of multiple queries and responses over a single TCP connection but not waiting for any outstanding replies before sending another query.

o Out-of-Order Processing: The processing of queries concurrently and the returning of individual responses as soon as they are available, possibly out of order. This will most likely occur in recursive servers; however, it is possible in authoritative servers that, for example, have different backend data stores.

6.1. Current Practices

Other more modern protocols (e.g., HTTP/1.1 [RFC7230], HTTP/2 [RFC7540]) have support by default for persistent TCP connections for all requests. Connections are then normally closed via a 'connection close' signal from one party.

6.2.1. Connection Reuse

To amortise connection setup costs, both clients and servers SHOULD support connection reuse by sending multiple queries and responses over a single persistent TCP connection.

When sending multiple queries over a TCP connection, clients MUST NOT reuse the DNS Message ID of an in-flight query on that connection in order to avoid Message ID collisions. This is especially important if the server could be performing out-of-order processing (see Section 7).

6.2.1.1. Query Pipelining

In order to achieve performance on par with UDP, DNS clients SHOULD pipeline their queries.

It is likely that DNS servers need to process pipelined queries concurrently and also send out-of-order responses over TCP in order to provide the level of performance possible with UDP transport.

DNS servers (especially recursive) MUST expect to receive pipelined queries. The server SHOULD process TCP queries concurrently, just as it would for UDP. The server SHOULD answer all pipelined queries, even if they are received in quick succession. The handling of responses to pipelined queries is covered in Section 7.

7. Response Reordering

Authoritative servers and recursive resolvers are RECOMMENDED to support the preparing of responses in parallel and sending them out of order, regardless of the transport protocol in use.

In order to achieve performance on par with UDP, recursive resolvers SHOULD process TCP queries in parallel and return individual responses as soon as they are available, possibly out of order.

[HTTP/1.1 pipelining can only do ordered responses]
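[The pipelining and Message ID rules quoted above can be sketched in a few lines. This is an illustrative sketch, not production code: it only builds the RFC 1035 wire format with RFC 7766's two-byte length prefix and shows how unique Message IDs let a client match out-of-order replies; the helper names (`frame`, `encode_query`, `demux`) are made up for the example.]

```python
import struct

def frame(msg: bytes) -> bytes:
    # RFC 7766: each DNS message sent over TCP is prefixed with a
    # two-byte, big-endian length field.
    return struct.pack("!H", len(msg)) + msg

def encode_query(qid: int, name: str, qtype: int = 1) -> bytes:
    # Minimal QDCOUNT=1 query: 12-byte header, QNAME labels, QTYPE, QCLASS.
    header = struct.pack("!HHHHHH", qid, 0x0100, 1, 0, 0, 0)  # RD bit set
    qname = b"".join(bytes([len(l)]) + l.encode()
                     for l in name.split(".")) + b"\x00"
    return header + qname + struct.pack("!HH", qtype, 1)  # class IN

def demux(stream: bytes) -> dict[int, bytes]:
    # Split a TCP byte stream back into messages and key them by Message
    # ID, so out-of-order responses can be matched to in-flight queries.
    out, i = {}, 0
    while i + 2 <= len(stream):
        (length,) = struct.unpack("!H", stream[i:i + 2])
        msg = stream[i + 2:i + 2 + length]
        out[struct.unpack("!H", msg[:2])[0]] = msg
        i += 2 + length
    return out

# Pipelining: write every framed query before reading any reply; unique
# IDs (never reused while in flight, per Section 6.2.1) make reordered
# responses unambiguous.
wire = b"".join(frame(encode_query(qid, n))
                for qid, n in [(1, "example.com"), (2, "example.org")])
```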

Comment by themafia 4 days ago

> The growing deployment of DNS Security (DNSSEC) and IPv6 has increased response sizes and therefore the use of TCP.

Yes, but doesn't IPv6 also increase the "maximum safe UDP packet size" from 512 bytes to 1280?

> Existing deployments of DNSSEC [RFC4033] have shown that truncation at the 512-byte boundary is now commonplace. For example, a Non-Existent Domain (NXDOMAIN) (RCODE == 3) response from a DNSSEC-signed zone using NextSECure 3 (NSEC3) [RFC5155] is almost invariably larger than 512 bytes.

This has been a flagged issue in DNSSEC since it was originally considered. This was a massive oversight on their part and was only added because DNSSEC originally made it quite easy to probe entire DNS trees and expose obscured RRs.

> The MTU most commonly found in the core of the Internet is around 1500 bytes, and even that limit is routinely exceeded by DNSSEC-signed responses.

> Stub resolver implementations (e.g., an operating system's DNS resolution library) MUST support TCP since to do otherwise would limit the interoperability between their own clients and upstream servers.

Fair enough but are network clients actually meant to use DNSSEC? Isn't this just an issue for authoritative and recursive DNSSEC resolvers to and down the roots?

Comment by bastard_op 4 days ago

>> The growing deployment of DNS Security (DNSSEC) and IPv6 has increased response sizes and therefore the use of TCP.

> Yes, but doesn't IPv6 also increase the "maximum safe UDP packet size" from 512 bytes to 1280?

DNS mostly has to support larger sizes, and has for decades, for things like SRV/TXT records used for various encryption schemes and large blocks of text. Having worked for a registrar and dealt with DDoS, there's not much you can do but filter more intelligently. There are DDoS appliances/services built just to deal with volumetric queries from hosts for that reason.

Comment by tptacek 4 days ago

Just to add real quick: there is not in fact a meaningful growing deployment of DNSSEC --- in fact, in North America and the western commercial Internet, the opposite thing is true: the number of signed zones has decreased. This is especially stark if you look at the true figure of merit, DNSSEC deployment on popular zones (take the Tranco academic research ranking of popular zones as a model):

https://dnssecmenot.fly.dev/

Comment by Hizonner 4 days ago

> Yes, but doesn't IPv6 also increase the "maximum safe UDP packet size" from 512 bytes to 1280?

Sure would be nice if people used IPv6. Even if you're actually sending data over IPv6, that doesn't mean the DNS lookups are going over IPv6. Infrastructure like that lags.

> This has been a flagged issue in DNSSEC since it was originally considered. This was a massive oversight on their part and was only added because DNSSEC originally made it quite easy to probe entire DNS trees and expose obscured RRs.

... probably because the people who originally designed DNSSEC (and DNS) couldn't believe that people would be crazy enough to try to keep their DNS records secret (or run split address spaces, for that matter). But anyway, whatever the reason, the replies are big and that has to be dealt with.

> Fair enough but are network clients actually meant to use DNSSEC?

You should be validating as close to the point of use as possible.

> Isn't this just an issue for authoritative and recursive DNSSEC resolvers to and down the roots?

If by "resolvers" you mean "local resolution-only servers", then that's common, but arguably bad, practice.

Anyway, using TCP also neuters DNS as a DoS amplifier, at least if you can make it universal enough to avoid downgrade attacks.

Comment by crote 4 days ago

> probably because the people who originally designed DNSSEC (and DNS) couldn't believe that people would be crazy enough to try to keep their DNS records secret

I wonder if it's time to just retire this mechanism. In 2025 you'd have to be crazy to not use encryption with an internet-facing host, which in practice usually means TLS, which means your hostname is already logged in Certificate Transparency logs and trivially enumerated.

Comment by toast0 4 days ago

You can work with wildcard certs and your hostnames need not be enumerated.

Comment by Hizonner 4 days ago

How is giving every internal host a wildcard cert not a cure far worse than the disease in 99 percent of the cases?

Comment by themafia 4 days ago

> couldn't believe that people would be crazy enough to try to keep their DNS records secret

You'd hope people working on DNS would have had broader actual experience with it. There was an ironic lack of paranoia in the DNSSEC people, and they seemed overly focused on one peculiar problem: it's easy to spoof DNS responses when you typically have at most 2**16 - 1024 ports to choose from. They sort of ignored everything else.

> If by "resolvers" you mean "local resolution-only servers", then that's common, but arguably bad, practice.

I haven't kept pace with DNSSEC, but originally, this was the _recommended_ configuration. Has that changed?

> Anyway, using TCP also neuters DNS as a DoS amplifier,

We're ensuring all servers support TCP, but we're not anywhere near dropping UDP.

Comment by Hizonner 4 days ago

They did recommend it at one point. But I don't think that makes it not-bad. It was long enough ago that they might have been worried about crypto performance; I don't know.

Comment by thayne 4 days ago

> Fair enough but are network clients actually meant to use DNSSEC?

I dream of an alternate reality where DNSSEC and DANE had become more ubiquitous, and we didn't have need for CAs to sign TLS certificates[1]. But that requires DNSSEC (or some other cryptographic verification) on the client.

[1]: Or something like that. In that mythical world maybe DNSSEC was also better designed...

Comment by tptacek 4 days ago

Why would that be better?

Comment by 1vuio0pswjnm7 5 hours ago

Comparing the Effects of DNS, DoT, and DoH on Web Performance (2020)

Austin Hounsel, Kevin Borgolte, Paul Schmitt, Jordan Holland, and Nick Feamster

https://arxiv.org/pdf/1907.08089

"On the lossy 4G network, DoT grows increasingly faster than Do53, and DoH begins to close the gap."

"We discovered that current DNS clients do not utilize part of the DNS Internet Standard that could improve client performance and user experience. Unfortunately, the three public recursors we measured violate the standard [27] by not supporting queries with more than one question (QDCOUNT > 1). Cloudflare and Quad9 do not respond, and Google only responds to the first question."

[RFC 1035 (1987) mentions queries with multiple questions in a single packet. AFAIK there have never been any DNS servers that can read and respond to multiple questions in a single packet. But recently there is a practicable workaround, 29 years later: DoT pipelining (multiple questions over a single TCP connection). IME, after about 10 years of use, the speed of DoT blows away DoH]
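[For concreteness, here is a sketch of what a QDCOUNT > 1 query looks like on the wire; the paper found public recursors either ignore it or answer only the first question. The helper names are made up and the packet is illustrative only.]

```python
import struct

def qname(name: str) -> bytes:
    # RFC 1035 label encoding: length-prefixed labels, zero terminator.
    return b"".join(bytes([len(l)]) + l.encode()
                    for l in name.split(".")) + b"\x00"

def multi_question_query(qid: int, names: list) -> bytes:
    # QDCOUNT (header bytes 4-5) is simply the number of question
    # sections that follow; nothing in the format forbids more than one.
    header = struct.pack("!HHHHHH", qid, 0x0100, len(names), 0, 0, 0)
    questions = b"".join(qname(n) + struct.pack("!HH", 1, 1)  # A, IN
                         for n in names)
    return header + questions

pkt = multi_question_query(0x1234, ["example.com", "example.org"])
```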

A Comprehensive Study of DNS-over-HTTPS Downgrade Attack (2020)

Qing Huang, Deliang Chang, Zhou Li

https://www.usenix.org/system/files/foci20-paper-huang.pdf

"The fundamental reason is that all browsers enable Opportunistic Privacy profile by default, which allows DoH fall backs to DNS when DoH is not usable."

[DoT/DoH outside the browser generally does not have this problem

As we will see, DoT/DoH research generally

(a) is browser-centric

(b) assumes the only way to obtain DNS data is by letting a browser retrieve it piecemeal from remote servers automatically

(c) assumes the popular graphical browser is the only application that uses DNS data, and

(d) fails to consider other ways to retrieve and use DNS data that can actually speed up www information retrieval and increase "privacy", but do not necessarily work well with advertising and tracking]

Large Scale Measurement on the Adoption of Encrypted DNS (2021)

Sebastián García, Karel Hynek, Dmitrii Vekshin, Tomáš Čejka

https://arxiv.org/pdf/2107.04436

Organization 2

"The amount of DoH traffic is in average 35 times smaller than DoT."

"DoT Trends. DoT traffic seems to be much larger in the ISP (Organization 2) than in the other organizations. Showing a non-stationary growth in this organization. However, it shows an actually decrease in Organization 2 on mid-January 2021. The absolute number of DoT flows is larger than DoH in all the traffic captures combined."

"DoT traffic seems to be growing in some organizations and has a large volume of traffic considering all absolute numbers. It probably produces more global traffic than DoH."

Can Encrypted DNS Be Fast? (2021)

Austin Hounsel, Paul Schmitt, Kevin Borgolte, and Nick Feamster

https://link.springer.com/content/pdf/10.1007/978-3-030-7258...

"We note that queries for DNS and DoT are sent synchronously, i.e., they must each receive a response before the next query can be sent. On the other hand, DoH queries are sent asynchronously, functionality that is enabled by the underlying HTTP protocol [if it's HTTP/2]"

"Interestingly DoT lookup times are close to those of conventional DNS."

"Interestingly, for X and Y, we find that DoT performs 2.3 ms and 2.6 ms faster than conventional DNS, respectively"

"DoH experienced higher response times than conventional DNS or DoT, although this difference in performance varies significantly across DoH resolvers."

"DoT Can Meet or Beat Conventional DNS Despite High Latencies to Resolvers, Offering Privacy Benefits for no Performance Cost."

"DoH Performs Worse Than Conventional DNS and DoT as Latencies To Resolvers Increase."

[Authors apparently not aware that DoT queries can be sent asynchronously. I do this outside the browser. Nor did the authors acknowledge that, as an alternative to using HTTP/2, DoH queries can be sent over HTTP/1.1 using pipelining. I do this outside the browser when ports 53 and 853 are being redirected by the ISP

"Table 5: Supported HTTP versions by the resolvers found in our Internet scan.

HTTP Version support Number of servers

Only HTTP/1 45 (4.9 %)

Only HTTP/2 86 (9.2 %)

HTTP/1 and HTTP/2 800 (85.9 %)"

Source: https://arxiv.org/pdf/2107.04436]

Domain Name Encryption Is Not Enough: Privacy Leakage via IP-based Website Fingerprinting (2021)

Nguyen Phong Hoang, Arian Akhavan Niaki, Phillipa Gill, and Michalis Polychronakis

https://arxiv.org/pdf/2102.08332

"Our technique exploits the complex structure of most websites, which load resources from several domains besides their primary one."

[What if you're not using a browser? I only retrieve resources from the primary domain, or an "API" domain if that is where the content comes from. Despite "the complex structure of most websites", this works really well for me. The "complex structure of most websites" is mostly ads and tracking. As for "mitigation", why not self-host a remote forward proxy]

Comment by avidiax 4 days ago

I would like to see DNS servers require each client to establish one TCP connection to be allowed to use UDP thereafter.

If this were the default on DNS servers, then DNS amplification attacks would be nearly impossible. They rely on spoofing a DNS request from the victim, and amplify because the response may be many times larger than the request. If TCP were required to be used before UDP responses can be received, then the victim would have to be first tricked into making a DNS request over TCP to each public DNS server.

The DNS Cookies standard (RFC 7873) doesn't do much to stop this, since it is impractical to fail queries from non-cookie clients.

DNS over TCP is supposed to be supported, so implementing this will push firewall admins in the right direction (allow both TCP/UDP outbound on 53).

Comment by tptacek 4 days ago

That's an interesting argument, given the whole impetus behind pushing for DoT vs. DoH was to allow network administrators the discretion to block encrypted DNS (by blocking DoT).

Comment by PunchyHamster 4 days ago

Quadrupling first-query time wouldn't be acceptable. And now the server has to keep some state per client, so more requirements.

Comment by avidiax 4 days ago

If the DNS server is actually local like it's supposed to be, it should have just a few ms ping. Quadrupling that just once is no big deal. The user won't even notice, since every OS does lots of background DNS activity before the user even opens an app or browser.

Saying that 2xRTT is a deal-breaker is like saying TCP in general is a deal breaker.

State per client is pretty simple. Use a bloom filter to decide if a client IP is ok for UDP, and slowly set bits to zero at random to force gradual eviction. With a secret nonce per server, the attacker can't engineer collisions except by controlling lots of IPs. For IPv6, just treat blocks above a certain size (e.g. a /48) as equivalent.
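A sketch of that idea in Python (the sizes, names, and eviction rate are all illustrative choices, not anything from the RFC): client IPs are mixed with a per-server secret via a keyed hash, a completed TCP query sets the bits, and random bit-clearing provides the gradual eviction described above.

```python
import hashlib
import os
import random

class UdpAllowFilter:
    # Bloom filter of client IPs that have completed a TCP query.
    # A secret per-server key prevents attackers from engineering
    # hash collisions; random bit-clearing gives gradual eviction.
    def __init__(self, nbits: int = 1 << 20, nhashes: int = 4):
        self.bits = bytearray(nbits // 8)
        self.nbits, self.nhashes = nbits, nhashes
        self.key = os.urandom(16)  # secret nonce per server

    def _positions(self, ip: str):
        for i in range(self.nhashes):
            h = hashlib.blake2b(ip.encode(), key=self.key,
                                salt=i.to_bytes(16, "big")).digest()
            yield int.from_bytes(h[:8], "big") % self.nbits

    def add(self, ip: str):
        # Called once the client completes a query over TCP.
        for p in self._positions(ip):
            self.bits[p // 8] |= 1 << (p % 8)

    def allowed(self, ip: str) -> bool:
        # Gate: answer this client over UDP only if all bits are set.
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(ip))

    def decay(self, n: int = 1):
        # Clear n random bits; long-idle clients eventually fall out
        # and must re-establish over TCP.
        for _ in range(n):
            p = random.randrange(self.nbits)
            self.bits[p // 8] &= ~(1 << (p % 8)) & 0xFF
```

For IPv6, the key would be derived from the covering block (e.g. the /48) rather than the full address, per the parent comment.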

And again, this should be the default. Someone that is seriously trying to run an open resolver should have their own fork of the source code and adjust this as they need. The small-time operators that accidentally make their resolvers open won't notice a bloom filter or a slow initial lookup.

Comment by nly 4 days ago

Why not just split responses over multiple UDP packets?

Spoofing/amplification attacks are already a problem, no?

Comment by 1vuio0pswjnm7 2 days ago

    s/6677/7766/

Comment by sparrish 4 days ago

[March 2016]

Comment by tptacek 4 days ago

Important note because DoT deployment has basically collapsed compared to DoH, which appears to have won the market.