
Connecting on the QUIC

By Nathan Willis
July 17, 2013

In the never-ending drive to increase the perceived speed of the Internet, improving protocol efficiency is considerably easier than rolling out faster cabling. Google is indeed setting up fiber-optic networks in a handful of cities, but most users are likely to see gains from the company's protocol experimentation, such as the recently-announced QUIC. QUIC stands for "Quick UDP Internet Connection." Like SPDY before it, it is a Google-developed extension of an existing protocol designed to reduce latency. But while SPDY worked at the application layer (modifying HTTP by multiplexing multiple requests over one connection), QUIC works at the transport layer. As the name suggests, it implements a modification of UDP, but that does not tell the whole story. In fact, it is more accurate to think of QUIC as a replacement for TCP. It is intended to optimize connection-oriented Internet applications, such as those that currently use TCP, but in order to do so it needs to sidestep the existing TCP stack.

A June post on the Chromium development blog outlines the design goals for QUIC, starting with a reduction in the number of round trips required to establish a connection. The speed of light being constant, the blog author notes, round trip times (RTTs) are essentially fixed; the only way to decrease the impact of round trips on connection latency is to make fewer of them. However, that turns out to be difficult to do within TCP itself, and TCP implementations are generally provided by the operating system, which makes experimenting with them on real users' machines difficult anyway.

In addition to side-stepping the problems of physics, QUIC is designed to address a number of pain points uncovered in the implementation of SPDY (which ran over TCP). A detailed design document goes into the specifics. First, the delay of a single TCP packet introduces "head of line" blocking in TCP, which undercuts the benefits of SPDY's application-level multiplexing by holding up all of the multiplexed streams. Second, TCP's congestion-handling throttles back the entire TCP connection when there is a lost packet—again, punishing multiple streams in the application layer above.

There are also two issues that stem from running SSL/TLS over TCP: resuming a disconnected session introduces an extra handshake due solely to the protocol design (i.e., not for security reasons, such as issuing new credentials), and the decryption of packets historically needed to be performed in order (which can magnify the effects of a delayed packet). The design document notes that the in-order decryption problem has been largely solved in subsequent revisions, but at the cost of additional bytes per packet. QUIC is designed to implement TLS-like encryption in the same protocol as the transport, thus reducing the overhead of layering TLS over TCP.

Some of these specific issues have been addressed before—including by Google engineers. For example, TCP Fast Open (TFO) reduces round trips when re-connecting to a previously visited server, as does TLS Snap Start. In that sense, QUIC aggregates these approaches and rolls in several new ones, although one reason for doing so is the project's emphasis on a specific use case: TLS-encrypted connections carrying multiple streams to and from a single server, like one often does when using a web application service.

The QUIC team's approach has been to build connection-oriented features on top of UDP, testing the result between QUIC-enabled Chromium builds and a set of (unnamed) Google servers, plus some publicly available server test tools. The specifics of the protocol are still subject to change, but Google promises to publish its results if it finds techniques that result in clear performance improvements.

QUIC trip

Like SPDY, QUIC multiplexes several streams between the same client-server pair over a single connection—thus reducing the connection setup costs, transmission of redundant information, and overhead of maintaining separate sockets and ports. But much of the work on QUIC is focused on reducing the round trips required when establishing a new connection, including the handshake step, encryption setup, and initial requests for data.

QUIC cuts into the round-trip count in several ways. First, when a client initiates a connection, it includes session negotiation information in the initial packet. Servers can publish a static configuration file to host some of this information (such as the encryption algorithms supported) for access by all clients, while individual clients provide some of it on their own (such as an initial public encryption key). Since the server's static configuration ought to have a very long lifetime, fetching it the first time costs a single round trip that is then amortized over many weeks or months of browsing. Second, when servers respond to an initial connection request, they send back a server certificate, hashes of a certificate chain for the client to verify, and a synchronization cookie. In the best-case scenario, the client can check the validity of the server certificate and start sending data immediately—with only one round trip expended.

Where the savings really come into play, however, is on subsequent connections to the same server. For repeat connections within a reasonable time frame, the client can assume that the same server certificate will still be valid. The server, however, needs a bit more proof that the computer attempting to reconnect is indeed the same client as before, not an attacker attempting a replay. The client proves its identity by returning the synchronization cookie that the server sent during the initial setup. Again, in the best-case scenario, the client can begin sending data immediately without waiting a round trip (or three) to re-establish the connection.
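
As a rough sketch of that accounting (a model of the design goal described here, not of QUIC's actual state machine), the best-case setup cost might look like this; the function name and cache flags are invented for the illustration:

    def round_trips_before_data(have_server_config, have_valid_cookie):
        """Best-case connection-setup round trips, per the description above.

        Both flags stand for hypothetical client-side cache state; the numbers
        are the article's best-case figures, not a protocol specification.
        """
        if not have_server_config:
            # First-ever contact: one round trip to fetch the server's
            # long-lived static configuration and certificate material.
            return 1
        if not have_valid_cookie:
            # Config is cached, but the server has no proof of this client's
            # address yet; assume roughly one round trip in the best case.
            return 1
        # Warm repeat connection: the request rides along with the first packet.
        return 0

    # For comparison, a cold TCP connection plus a full TLS handshake of the
    # era cost roughly three round trips before the first request could go out.
    for config, cookie in [(False, False), (True, False), (True, True)]:
        print(config, cookie, round_trips_before_data(config, cookie))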

As of now, the exact makeup of this cookie is not set in stone. It functions much like the cookie in TFO, which was also designed at Google. The cookie's contents are opaque to the client, but the documentation suggests that it should at least include proof that the cookie-holder came from a particular IP address and port at a given time. The server-side logic for cookie lifetimes and under what circumstances to reject or revoke a connection is not mandated. The goal is that by including the cookie in subsequent messages, the client demonstrates its identity to the server without additional authentication steps. In the event that the authentication fails, the system can always fall back to the initial-connection steps. An explicit goal of the protocol design is to better support mobile clients, whose IP addresses may change frequently; even if the zero-round-trip repeat connection does not succeed every time, it still beats initiating both a new TCP and a new TLS connection on each reconnect.
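
The cookie's construction is left to the server. One plausible shape, sketched here purely as an assumption rather than anything QUIC mandates, is an authenticated token over the client's address, port, and a timestamp, keyed with a secret only the server holds; that keeps the server stateless between connections while still proving the client recently owned that address:

    # Hypothetical source-address cookie along the lines described above: an
    # opaque-to-the-client token binding an address and port to a point in
    # time. This is an illustration, not QUIC's actual cookie format.
    import hmac, hashlib, os, struct, time

    SERVER_SECRET = os.urandom(32)        # known only to the server
    COOKIE_LIFETIME = 24 * 3600           # server policy; nothing here is mandated

    def issue_cookie(client_ip, client_port):
        issued = int(time.time())
        msg = "%s|%d|%d" % (client_ip, client_port, issued)
        tag = hmac.new(SERVER_SECRET, msg.encode(), hashlib.sha256).digest()
        return struct.pack("!Q", issued) + tag

    def check_cookie(cookie, client_ip, client_port):
        issued = struct.unpack("!Q", cookie[:8])[0]
        if time.time() - issued > COOKIE_LIFETIME:
            return False                  # stale: fall back to the full handshake
        msg = "%s|%d|%d" % (client_ip, client_port, issued)
        good = hmac.new(SERVER_SECRET, msg.encode(), hashlib.sha256).digest()
        return hmac.compare_digest(cookie[8:], good)

    # The server keeps no per-client state: everything needed to re-verify the
    # client is inside the cookie. A mobile client whose address has changed
    # simply fails the check and redoes the initial-connection steps.
    c = issue_cookie("192.0.2.7", 51000)
    assert check_cookie(c, "192.0.2.7", 51000)
    assert not check_cookie(c, "198.51.100.9", 51000)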

Packets and loss

In addition to its rapid-connection-establishment goals, QUIC implements some mechanisms to cut down on retransmissions. First, the protocol adds packet-level forward-error-correction (FEC) codes to the unused bytes at the end of streams. Retransmission remains the fallback for lost data, but the redundant FEC data should make it possible to reconstruct lost packets at least a portion of the time. The design document discusses using the bitwise sum of a block of packets as the FEC code; the assumption is that the loss of a single packet is the most common case, and this FEC would allow not only the detection but also the reconstruction of such a lost packet.
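
A minimal sketch of that idea, assuming the "bitwise sum" is an XOR parity packet computed over a block of equal-length packets (the usual way such a code is built; the real packet framing is not shown and may differ):

    # Illustrative XOR-parity FEC over a block of equal-length packets. If
    # exactly one packet in the block is lost, it can be rebuilt from the
    # parity plus the packets that did arrive, with no retransmission.

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def make_parity(packets):
        parity = bytes(len(packets[0]))   # packets assumed padded to equal length
        for p in packets:
            parity = xor_bytes(parity, p)
        return parity

    def recover_single_loss(received, parity):
        """Rebuild at most one missing packet (marked as None) in the block."""
        missing = [i for i, p in enumerate(received) if p is None]
        if len(missing) != 1:
            return received               # nothing lost, or too much: retransmit
        rebuilt = parity
        for p in received:
            if p is not None:
                rebuilt = xor_bytes(rebuilt, p)
        received[missing[0]] = rebuilt
        return received

    block = [b"pkt-0001", b"pkt-0002", b"pkt-0003", b"pkt-0004"]
    parity = make_parity(block)
    damaged = [block[0], None, block[2], block[3]]   # packet 1 lost in transit
    assert recover_single_loss(damaged, parity)[1] == block[1]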

Second, QUIC has a set of techniques under review to avoid congestion. By comparison, TCP employs a single technique, congestion windows, which (as mentioned previously) are unforgiving to multiplexed connections. Among the techniques being tested are packet pacing and proactive speculative retransmission.

Packet pacing, quite simply, is scheduling packets to be sent at regular intervals. Efficient pacing requires an ongoing bandwidth estimation, but when it is done right, the QUIC team believes that pacing improves resistance to packet loss caused by intermediate congestion points (such as routers). Proactive speculative retransmission amounts to sending duplicate copies of the most important packets, such as the initial encryption negotiation packets and the FEC packets. Losing either of these packet types triggers a snowball effect, so selectively duplicating them can serve as insurance.
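
As an illustration of the pacing idea only (the rate here is a stand-in for the ongoing bandwidth estimate; none of the numbers come from Google), spacing transmissions by packet size over the estimated rate looks roughly like this:

    # Illustrative packet pacing: rather than bursting a window's worth of
    # packets at once, space them so the average rate matches an estimate of
    # the bottleneck bandwidth, reducing queue build-up at congested routers.
    import time

    def paced_send(packets, estimated_bandwidth_bps, send):
        next_send_time = time.monotonic()
        for pkt in packets:
            delay = next_send_time - time.monotonic()
            if delay > 0:
                time.sleep(delay)         # a real stack would use timers, not sleep
            send(pkt)
            # Each packet "costs" its transmission time at the estimated rate.
            next_send_time += len(pkt) * 8 / estimated_bandwidth_bps

    # Example: 1200-byte packets paced against an assumed 10 Mbit/s estimate,
    # i.e. roughly one packet every 0.96 ms.
    paced_send([b"x" * 1200] * 5, 10_000_000, send=lambda p: None)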

But QUIC is designed to be flexible when it comes to congestion control. In part, the team appears to be testing out several good-sounding ideas to see how well they fare in real-world conditions. It is also helpful for the protocol to be able to adapt in the future, when new techniques or combinations of techniques prove themselves.

QUIC is still very much a work in progress. Then again, it can afford to be. Unlike SPDY, which eventually evolved into HTTP 2.0, QUIC may never become a standard in its own right; the team behind it is up front about the fact that the ideas it implements, if proven successful, would ultimately be destined for inclusion in some future revision of TCP. Building the system on UDP is a purely practical compromise: it allows QUIC's connection-management concepts to be tested on a protocol that is already understood and accepted by the Internet's routing infrastructure. Building an entirely new transport-layer protocol would be almost impossible to test, but piggybacking on UDP at least provides a start.

The project addresses several salient questions in its FAQ, including the speculation that QUIC's goals might have been easily met by running SCTP (Stream Control Transmission Protocol) over DTLS (Datagram Transport Layer Security). SCTP provides the desired multiplexing, while DTLS provides the encryption and authentication. The official answer is that SCTP and DTLS both utilize the old, round-trip–heavy semantics that QUIC is interested in dispensing with. It is possible that other results from the QUIC experiment will make it into later revisions, but without this key feature, the team evidently felt it would not learn what it wanted to. However, as the design document notes: "The eventual protocol may likely strongly resemble SCTP, using encryption strongly resembling DTLS, running atop UDP."

The "experimental" nature of QUIC makes it difficult to predict what outcome will eventually result. For a core Internet protocol, it is a bit unusual for a single company to guide development in house and deploy it in the wild, but then again, Google is in a unique position to do so with real-world testing as part of the equation: the company both runs web servers and produces a web browser client. So long as the testing and the eventual result are open, that approach certainly has its advantages over years of committee-driven debate.


Connecting on the QUIC

Posted Jul 17, 2013 14:27 UTC (Wed) by mtaht (guest, #11087) [Link] (10 responses)

I like many of the ideas in QUIC! Especially treating crypto as a first class object baked into the protocol on day 1.

Connecting on the QUIC

Posted Jul 18, 2013 2:45 UTC (Thu) by filteredperception (guest, #5692) [Link] (9 responses)

"I like many of the ideas in QUIC! Especially treating crypto as a first class object baked into the protocol on day 1."

I like the idea of simple layers when it comes to networking protocols myself. I don't personally have the technical depth to argue about how this relates to QUIC, but I've grown fond of the whole approach of lower layers doing unencrypted stuff as fast as they can, and then the encryption layer on top of that, more easily replaceable with an alternate form of future encryption that may be discovered to be more secure. It seems part of the genesis here is a filtered internet that only allows TCP/UDP, thus piggybacking on those unfiltered protocols. For some reason I smell a future where anything non-QUIC is filtered, and the next kludge will need to masquerade as QUIC as QUIC masquerades as UDP. Probably I'm just paranoid or half-ignorant or both...

Connecting on the QUIC

Posted Jul 18, 2013 3:25 UTC (Thu) by foom (subscriber, #14868) [Link] (8 responses)

Well, it's worse, really. Much of the internet allows only HTTP on port 80 (and some middlebox intercepts and modifies it to "help" you), and SSL on port 443 (generally unmolested).

That is also why we have horrible protocols like WebSockets (which is used "optionally" with SSL under it; but, realistically, SSL is required if you want it to actually work: without SSL, a transparent proxy or other middlebox will see your "HTTP" and helpfully rewrite it, destroying the WebSockets connection).

According to the QUIC docs, part of the reason QUIC only functions encrypted is that they observed intermediate boxes breaking their connections by helpfully rewriting some data when it was sent unencrypted. If you encrypt everything end-to-end, there's little that a middle-box can actually do to you.

These days, end-to-end encryption *with endpoint verification* is the only sure way to say "don't mess with my data".

Unfortunately, the design doc says that QUIC "may" not require certificate validation on port 80 (that is, when used to replace "http"). If that's the case, it'd still be possible to make a middlebox which implements QUIC support, decrypting, mutating, and re-encrypting your traffic. Then, there'd still be the possibility of the internals of QUIC being baked into the network, like HTTP is now.

Connecting on the QUIC

Posted Jul 18, 2013 3:46 UTC (Thu) by filteredperception (guest, #5692) [Link] (1 responses)

Thanks for the good reply. Next question (presumably with another 'here's how bad it really is' answer): why isn't ubiquitous IPsec over IPv6 the right answer to this mess?

Connecting on the QUIC

Posted Jul 21, 2013 12:19 UTC (Sun) by Lennie (subscriber, #49641) [Link]

My guess is no widespread use/support of opportunistic encryption and NAT breaking things (as usual).

Did you notice Google mentioned that one of the reasons is that not every OS supports the lower layers they wanted to use (or not as well as needed)?

Basically any new protocol will need to use the HTTPS port and will need to use encryption. The encryption part might be a good idea, especially if DNSSEC/DANE or something else improves the situation with the current CA system.

Connecting on the QUIC

Posted Jul 18, 2013 9:58 UTC (Thu) by nim-nim (subscriber, #34454) [Link] (5 responses)

The problem with those all-or-nothing approaches is that it will only increase the rate at which those intermediaries add systematic encryption MITM capabilities (as has already been the case for some years). It would have been much better to try to understand why this rewriting exists, and how to interoperate cleanly with it, rather than launch an encryption arms race.

In the end, intermediaries always have the option to drop all this annoying traffic and force the use of protocols they can work with.

Connecting on the QUIC

Posted Jul 18, 2013 17:20 UTC (Thu) by filteredperception (guest, #5692) [Link] (4 responses)

Sorry, but I'm tired of this tired argument. An encryption arms race is *exactly* what we need. And *right now*, due to the SnowdenCrashPRISM revelations, is the right time for it. If intermediaries want to try dropping encrypted traffic that they can't spy on and monetize, then they need to be put in jail for violating Network Neutrality. If it happens in other countries that don't have NN laws, then those citizens will at least get to see the naked authoritarian nature of their subset of the internet.

This idea that because a bunch of big megacorp advertisers have been a large part of the evolution and deployment of the internet, that they get to remove our options of using the internet with end-to-end encryption, and symmetric ability to use *both* clients *and* servers at ordinary residences... well, I think the term "arms race" is not a dirty phrase when it comes to how I think we should deal with those orwellian turds.

Connecting on the QUIC

Posted Jul 18, 2013 17:32 UTC (Thu) by nim-nim (subscriber, #34454) [Link] (3 responses)

In case you've still not understood SnowdenCrashPRISM, it occurred at the endpoints, not at the intermediary level. And QUIC *is* proposed by a big megacorp advertiser involved in PRISM. Because both the advertisers and the NSA *hate* intermediaries that can help sanitize the data feed the advertiser-written browser sends to the NSA pickup point.

So, your indignation is noted, but your technical analysis is completely misplaced. This arms race only helps the Orwellian turds, both the governmental ones (which are the only ones left with crypto-breaking capabilities) and the private ones (that secure their data stream from privacy watchdog interventions).

Connecting on the QUIC

Posted Jul 18, 2013 20:19 UTC (Thu) by filteredperception (guest, #5692) [Link] (2 responses)

"In case you've still not understood SnowdenCrashPRISM, it occurred at endpoints, not at intermediary level."

Perhaps I have misunderstood. But I imagined it involved direct access to the servers of the companies on the PPT slide. Now, I also see the whole prism/fiber mass collection of ISP data, but we all really really knew that was going on already right? And ditto with pen-register stuff with traditional phone calls.

"And QUIC *is* proposed by a big megacorp advertiser involved in PRISM."

right, but...

"Because both the advertisers and the NSA *hate* intermediaries that can help sanitize the data-feed the advertiser-written browser sends to the NSA pickup point."

I have to disagree. I can't honestly speak for the 'because', but I'll throw my own theories alongside yours. I imagine a lot of honest engineers at Google are working on it because they buy into the line that it's a necessary kludge to work around a filtered internet. My argument is that they should be more 'root hackers' and fix the brokenness of that filtering instead of working around it. Of course any big business is going to choose the less risky workaround path than the holywar path I'm advocating. Se La Vi. Next, I imagine the internet oligopolists or whatever you want to call them are probably happy with it as long as it prevents home server software from invading their core markets (that make money via the servers they host and run on their 'endpoints'). I.e. this convoluted workaround plays into their FUD that home hosted servers are still something they can get away with persecuting, while trying to leverage some of the obvious benefits we would see evolve if home serving was protected as a fundamental right on the internet.

"So, your indignation is noted, but your technical analysis is completely misplaced."

I totally grant that as possible, undoubtedly partially, but you haven't convinced me it's for the reasons you give.

"This arms race only helps the orweillian turds, both the governmental ones (which are the only ones left with crypto-breaking capabilities)"

Exsqueeze me, who exactly am I supposed to feel sorry for that would be losing their crypto-breaking capabilities? The gangs, the corporations?

"and the private ones (that secure their data stream from privacy watchdog interventions)"

That's a twisted way of saying: the pedophiles that get away with their evil. Or I really just don't get what you are referring to at all. But while it is sad that private turds will be able to secure their datastream from others (including cops), I do believe that the state has enough traditional power and money to continue its fight against actually evil criminals that use encryption. The state can plant bugs in the worst case, and otherwise send paid police investigators and detectives, to like, investigate and detect and shit.

Connecting on the QUIC

Posted Jul 27, 2013 17:31 UTC (Sat) by pcarrier (guest, #65446) [Link] (1 responses)

> Se La Vi.

C'est la vie. French for "That's life".

Connecting on the QUIC

Posted Jul 28, 2013 22:15 UTC (Sun) by filteredperception (guest, #5692) [Link]

Cool. In my own broken understanding of things I had internally thought of it as perhaps Latin for "So Be It". Thank you for confirming my assumption that even if I was technically wrong, it didn't really matter in the big picture. :)

Congestion avoidance

Posted Jul 17, 2013 15:36 UTC (Wed) by flohoff (guest, #32900) [Link] (7 responses)

It seems the engineers of today have enough distance from the late '80s, when networks collapsed because of non-existent congestion detection/avoidance.

Implementing a new protocol with new network semantics today at a much larger scale is a very dangerous thing.

Rather than fixing it for a whole bunch of applications by introducing a new IP-level protocol, we'll go the fast track by abusing a datagram protocol to let the applications decide what's the best way to do congestion avoidance.

Yes - it might be we need to squeeze the last ms/RTT out of every connection but this needs to be done VERY carefully.

Congestion avoidance

Posted Jul 17, 2013 16:04 UTC (Wed) by rvfh (guest, #31018) [Link] (2 responses)

> abuse a datagram protocol

UDP is very good at sending packets without caring about what happens to them, so it makes a very good base for any stream-oriented protocol, which just needs to add its own check/ack/retransmit protocol on top.

Yes congestion will need to be dealt with, but that's not really an issue: the same recipes that work for TCP can be re-used.

Congestion avoidance

Posted Jul 17, 2013 19:54 UTC (Wed) by flohoff (guest, #32900) [Link] (1 responses)

> Yes congestion will need to be dealt with, but that's
> not really an issue: the same recipes that work for
> TCP can be re-used.

Every application reimplements its own idea of congestion detection and avoidance? I don't think that's a good idea. A whole lot of work is done today to ensure fairness between streams of data. This IMHO only works as long as they all play by the same rules.

And reimplementing the same semantics is exactly what QUIC tries to avoid.

Ideas like ECN will not work for QUIC, as ECN needs bits in the TCP header. Instead of working around the problems of RTT, bufferbloat, and TCP congestion detection, we should push stuff like RED+ECN and AQM.

Yes - it's the providers' networks, the 802.11 stacks, DSL routers, etc. It'll take longer than simply pushing out a new browser every 4 weeks.

It's not a problem with niche applications like bittorrent - but pushing a new protocol with completely untested congestion behaviour out to millions of clients and shifting the majority of traffic from TCP to UDP will present us with a bunch of interesting new problems.

What's next? Hangout replaces SMTP, DNS over QUIC, G+ instead of IGMP? ProtoBuf in UDP instead of POP/IMAP?

Congestion avoidance

Posted Jul 17, 2013 22:57 UTC (Wed) by khim (subscriber, #9252) [Link]

> It's not a problem with niche applications like bittorrent - but pushing a new protocol with completely untested congestion behaviour out to millions of clients and shifting the majority of traffic from TCP to UDP will present us with a bunch of interesting new problems.

You mean "it's fine to experiment with niche protocol which generates almost half of upstream traffic" but "to touch HTML which is barely two times bigger then said niche protocol" is a disaster? How come?

P.S. And yes, when µTP was first introduced in µTorrent 1.9 it affected many providers in a really bad way, but future upgrades made it much nicer. In fact it affects other applications less than the older, TCP-based version.

Congestion avoidance

Posted Jul 17, 2013 16:47 UTC (Wed) by gebi (guest, #59940) [Link]

Most bittorrent traffic today is already on UDP because of µTorrent (https://github.com/bittorrent/libutp).

Congestion avoidance

Posted Jul 17, 2013 17:35 UTC (Wed) by geofft (subscriber, #59789) [Link] (2 responses)

I think there are two good arguments for not caring about congestion avoidance to the extent that we did in the '80s:

1) A lot of the latency, dropped packets, etc. on most individuals' web-browsing connections tends to be due to something like wifi or mobile networks, not congestion on the local segment (where dozens of workstations are all connected to the same, slow physical link).

2) There are enough applications that already work around TCP's congestion avoidance, by opening a bunch of web connections in parallel, using UDP directly (often for streaming media), etc. While we're certainly not running the entire Internet on a TCP-without-congestion-avoidance, I think there's a decent chance that the apps that care have already worked around it hackishly, enough that them working around it well is not going to put noticeable strain on the network.

Congestion avoidance

Posted Jul 17, 2013 22:41 UTC (Wed) by mtaht (guest, #11087) [Link] (1 responses)

"1) A lot of the latency, dropped packets, etc. on most individuals' web-browsing connections tends to be due to something like wifi or mobile networks, not congestion on the local segment (where dozens of workstations are all connected to the same, slow physical link)."

Um, half-no. wifi and mobile networks currently go to extraordinary lengths to not drop a packet, thus inducing lots of latency.

as for 2) I appreciate your optimism. On the edge of the network we are running TCP almost entirely in slow start (without congestion avoidance), and the results are generally bad.

/me looks for an exit

Congestion avoidance

Posted Jul 18, 2013 20:39 UTC (Thu) by zlynx (guest, #2285) [Link]

The wireless links don't drop packets in a way that is visible to TCP/IP. But many packets are dropped during transmission. It is exactly the same as how a half-duplex Ethernet link drops packets when a transmission conflict happens.

Connecting on the QUIC

Posted Jul 17, 2013 16:50 UTC (Wed) by felixfix (subscriber, #242) [Link] (10 responses)

My home choices are satellite or dialup. Satellite at least has bandwidth, but geez I'd love to use something which would reduce the latency (1.5 second pings).

Connecting on the QUIC

Posted Jul 17, 2013 17:13 UTC (Wed) by dlang (guest, #313) [Link] (8 responses)

no protocol is going to improve the speed of light, and that's what causes your long ping times.

Connecting on the QUIC

Posted Jul 17, 2013 17:32 UTC (Wed) by felixfix (subscriber, #242) [Link]

Didn't say it would improve ping time, and I frankly don't care about ping. But when ping takes 1.5 seconds, nothing else will be any faster, and anything which would reduce the number of round trips is going to help.

Connecting on the QUIC

Posted Jul 17, 2013 18:05 UTC (Wed) by renox (guest, #23785) [Link]

> no protocol is going to improve the speed of light, and that's what causes your long ping times.

I think that he knows that, but if protocols could at least avoid wasting RTTs (like TCP does), it would be progress.

Connecting on the QUIC

Posted Jul 20, 2013 3:09 UTC (Sat) by giraffedata (guest, #1954) [Link] (5 responses)

There must be causes other than the speed of light behind a 1.5-second ping, since it only takes 0.3 seconds for a photon to travel from the surface of the Earth to a satellite (42,000 km above the center of the Earth) and back.

Likewise, I question the way the Chromium blog author motivates the project's round-trip reduction by pointing to the constant speed of light. I routinely work across a link which is 2000 km as the crow flies. The constant speed of light limits me to a 13 ms round trip, but I see about 100 ms ping time. There are apparently some significant variables involved in the length of a round trip.

Connecting on the QUIC

Posted Jul 20, 2013 4:08 UTC (Sat) by felixfix (subscriber, #242) [Link] (1 responses)

The signal goes up, down, turns around, up, and down again. It's very roughly half a second round trip.

Ping times range from 1.3 to 2.0 seconds. I suspect the satellite and ground station are buffering as much as possible to maximize payload per packet, and that communication between the satellites and modem / ground stations is some very packed non-TCP protocol.

Connecting on the QUIC

Posted Jul 20, 2013 16:14 UTC (Sat) by giraffedata (guest, #1954) [Link]

> The signal goes up, down, turns around, up, and down again. It's very roughly half a second round trip.

Ah, I forgot you're pinging something else on Earth, not the satellite itself. So the physical constant c's limitation is, by my calculation, 0.5-0.6 seconds per ping.

Connecting on the QUIC

Posted Jul 20, 2013 7:28 UTC (Sat) by Cyberax (✭ supporter ✭, #52523) [Link] (2 responses)

You might want to check the real physical path of the fiber. I routinely get 2x the theoretic minimum time for long distances.

Also, on very short distances your latency can be dominated by the fixed switching latency.

Connecting on the QUIC

Posted Jul 20, 2013 10:21 UTC (Sat) by hummassa (guest, #307) [Link]

I find that the formula

ping time = 2 * (distance / c + sum(latency-per-hop))

is a reasonable approximation. Latency-per-hop is usually in the 5-15ms range for stupid or switched hops and anywhere in the 25-200ms range for routed/firewalled hops. I suppose a special protocol-changing hop could cost even more.

Connecting on the QUIC

Posted Jul 20, 2013 16:06 UTC (Sat) by giraffedata (guest, #1954) [Link]

> You might want to check the real physical path of the fiber... Also, on very short distances your latency can be dominated by the fixed switching latency.

Yes, those are two of the variables I'm talking about, showing that the constant speed of light doesn't by itself mean the only way to reduce latency is to reduce the number of trips.

Connecting on the QUIC

Posted Jul 17, 2013 20:14 UTC (Wed) by alanjwylie (subscriber, #4794) [Link]

You canna' change the laws of physics, cap'n

http://revk.www.me.uk/2013/07/you-canna-change-laws-of-ph...

I'd read this blog post by the owner of UK ISP Andrews & Arnold earlier this evening.

> This is a problem. TCP does not do 20Mb/s over 700ms. Do the sums. It means 1.7MB of transmit buffer at the sender per TCP session. I checked, and the config on a typical linux box was 128kB. You can put it up, and use window scaling (TCP only does 64KB without it) but that would mean reconfiguring every server on the internet to which you wish to communicate.
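
For reference, the sums the quote alludes to (just arithmetic on the quoted 20 Mbit/s and 700 ms figures, plus a 64 KB unscaled window):

    # Arithmetic behind the quote: bandwidth-delay product for a 20 Mbit/s
    # link with a 700 ms round-trip time.
    bandwidth_bps = 20_000_000            # 20 Mbit/s
    rtt_s = 0.7                           # 700 ms

    bdp_bytes = bandwidth_bps * rtt_s / 8
    print("%.2f MB must be in flight" % (bdp_bytes / 1e6))      # ~1.75 MB

    # Without window scaling, TCP's window tops out at 64 KB, which at this
    # RTT caps throughput at roughly:
    window_bytes = 64 * 1024
    print("%.2f Mbit/s" % (window_bytes * 8 / rtt_s / 1e6))     # ~0.75 Mbit/s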

Connecting on the QUIC

Posted Jul 17, 2013 20:07 UTC (Wed) by ibukanov (subscriber, #3942) [Link] (2 responses)

It is not clear how QUIC would stand against denial-of-service attacks. I suppose with a proper cookie the server may not need to maintain any state per client between connections making the whole thing more robust. But if the handshake is CPU intensive, that advantage could be offset.

Connecting on the QUIC

Posted Jul 18, 2013 18:43 UTC (Thu) by jwarnica (subscriber, #27492) [Link] (1 responses)

For the major use case, or at least the major test case, HTTP, servers are already tracking the user extensively. Sure, it means the kernel now is going to have to use up a bunch more memory for low level networking stuff, but the HTTP/app servers already are.

Your random low-end type device with an HTTP interface will continue to use TCP (and small routers and switches will still crash if you try and point two browsers at them).

Connecting on the QUIC

Posted Jul 18, 2013 21:26 UTC (Thu) by ibukanov (subscriber, #3942) [Link]

As I understand it, for purely static content, if the cookie is a crypto-hash of the client's IP/port together with some internal string not depending on the client, then the server does not need to keep anything in memory between connections. Compare that with TCP, where the SYN-ACK requires maintaining state between connection attempts, and that is used for DoS attacks. So QUIC could be more robust, as long as the hardware can keep up with the hash calculations.

Connecting on the QUIC

Posted Jul 20, 2013 4:04 UTC (Sat) by giraffedata (guest, #1954) [Link]

> TCP's congestion-handling throttles back the entire TCP connection when there is a lost packet—again, punishing multiple streams in the application layer above.

That seems to miss the point of what TCP's congestion handling does. The streams are not being punished - they're having their sending rate limited to what the network is actually capable of delivering. As that capacity has nothing to do with which stream's packet happened to get dropped, there's no reason to consider this entire-connection throttling to be a bad thing.

Perhaps what opponents of TCP are really thinking of is that an application built upon a TCP connection cannot get more than a fair share of the network capacity no matter what it does, whereas a protocol built upon UDP can hog the whole thing, at the expense of TCP users, if it wants.


Copyright © 2013, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds