あどけない話

Technical topics related to the Internet, and so on

Corrections to "パケットの設計から見るQUIC" (QUIC from the Perspective of Packet Design)

QUIC is such a large protocol that even I, who have been implementing it for a year and a half, can hardly claim to grasp it in full. Still, to understand one aspect of it quickly, I recommend "パケットの設計から見るQUIC", written by Nishida-san in n月刊ラムダノート Vol.2, No.1 (2020). From a QUIC specialist's point of view, however, it contains some slightly inaccurate parts, so I have summarized the points that should be corrected. (Sorry for being late.) I have let Nishida-san know that I am publishing this article, but everything written here is solely my own opinion.

The first page

  • "QUIC (Quick UDP Internet Connections) is a transport protocol designed as a mechanism that provides high reliability on top of the Internet architecture."

The QUIC standardized at the IETF is not an abbreviation of "Quick UDP Internet Connections". It is not an abbreviation of anything.

1.3.1 Connection ID

  • "If the server does not want to use this destination connection ID, it can send back a Long header specifying a more convenient connection ID. The packet type in that case is Retry."

The connection ID can be changed with Retry, but normally it is changed with Initial. Retry is used for address validation.

1.3.2 Packet number

  • "The Short header has a field called the packet number."

Packet numbers also exist in Initial and Handshake packets.

  • "For example, if the value of the Offset field is 10000, the data in this Stream frame starts at the 10000th byte from the beginning of the application data."

Since an Offset of 0 would normally be described as "the 1st byte", this should be "the 10001st byte".

  • "storing application data with a larger Offset in the packet with packet number 2 than in the packet with packet number 3"

If somewhat larger packet numbers had been used, the reordering caused by retransmission could have been expressed more naturally.

  • "To decrypt packets encrypted with different algorithms correctly and efficiently, an independent packet number space is needed for each."

This reads as "efficiently decrypt", but it is not done for efficiency: the packet number spaces are separated for security.

1.4 Establishing a QUIC connection

This is just an impression, but because the discussion of 0-RTT starts before 1-RTT is explained in detail, the structure becomes complicated. Explaining 0-RTT after 1-RTT would have been simpler.

  • "Incidentally, there is no big difference between 0-RTT and 1-RTT in the number of packets required to establish a connection."

0-RTT presupposes a PSK. A PSK is a mechanism for resuming a session, and its point is that server authentication is omitted; that is, the server does not have to send its certificate. Certificates are generally large, sometimes amounting to four or more QUIC packets. Describing the removal of, say, four packets from a connection establishment that takes only a few packets as "no big difference" feels wrong to me. I think this sentence would have been better left out.

1.5.2 ACK frame

  • "In this case, a total of two Gap and ACK Range fields are used, so the ACK Range Count field is 2."

The ACK Range Count does not include the First ACK Range, so in this case it is 1.
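To make the counting rule concrete, here is a sketch (my own, not from the article or the author's library) of how a receiver expands an ACK frame's ranges into packet numbers, following RFC 9000 Section 19.3.1:

```haskell
-- Expand the ranges in a QUIC ACK frame into acknowledged packet numbers.
-- Note that ACK Range Count counts only the (Gap, ACK Range) pairs;
-- the First ACK Range is separate.
ackedPacketNumbers :: Int            -- ^ Largest Acknowledged
                   -> Int            -- ^ First ACK Range
                   -> [(Int, Int)]   -- ^ (Gap, ACK Range) pairs
                   -> [Int]          -- ^ all acknowledged packet numbers
ackedPacketNumbers largest firstRange = go largest firstRange
  where
    -- Each block covers [largest - range .. largest]; the next block's
    -- largest is smallest - gap - 2, as specified by RFC 9000.
    go lg rng pairs = [lg - rng .. lg] ++ case pairs of
        []                 -> []
        ((gap, rng') : ps) -> go (lg - rng - gap - 2) rng' ps
```

For example, with Largest Acknowledged 10, First ACK Range 2, and one (Gap, ACK Range) pair (1, 1), the acknowledged packets are 8 to 10 and 4 to 5, and the ACK Range Count is 1 because only one pair is present.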

Implementing HTTP/3 in Haskell

Mew.org is now speaking HTTP/3 (HTTP/2 over QUIC). If you access the site with Firefox Nightly, the first connection will be HTTP/2; the following connections should then be HTTP/3, led by the Alt-Svc: header.

(Figure: Firefox Nightly)

This article explains insights I gained while implementing QUIC and HTTP/3 in Haskell.

HTTP/2 server library

I started implementing QUIC in January 2019. It took four months to reach a toy QUIC client, since the negotiation part is really complicated. When I tackled the server side, my brain got befuddled: I had no idea how to structure the server architecture.

So, I went back to HTTP/2. As described in HTTP/2 server library in Haskell, I succeeded in extracting an HTTP/2 server library from our HTTP/2 server.

QUIC client and server

After resuming the QUIC implementation, I spent countless hours developing QUIC. Finally, I joined the 16th interop test event and the 17th interop test event.

As described in Implementation status of QUIC in Haskell, I defined the following API:

runQUICServer :: ServerConfig -> (Connection -> IO ()) -> IO ()

When a QUIC connection is created on the server side, a designated lightweight thread is spawned with the Connection type. This abstraction seems reasonable because Connection hides internal information about a QUIC connection. However, at that moment I had no idea how to abstract QUIC streams. So, I defined the following APIs temporarily:

type StreamID = Int
type Fin = Bool
recvStream :: Connection -> IO (StreamID, ByteString)
sendStream :: Connection -> StreamID -> Fin -> ByteString -> IO ()
shutdownStream :: Connection -> StreamID -> IO ()

These APIs seemed awkward since they expose stream identifiers, which applications should not have to know about. Anyway, through this development I gained the insight that a lot of code can be shared between the client and the server.

HTTP/2 client library

Now I wanted to verify that an HTTP/2 client library could be built by sharing a lot of the server code. The result is promising. The HTTP/2 library in Haskell now provides both the client and server sides and implements self-testing.

And importantly, I found beautiful abstractions for HTTP requests and responses. For clients, requests are outgoing data. For servers, responses are also outgoing data. Since a response status can be expressed as the pseudo-header :status, we can define outgoing data as follows:

data OutObj = OutObj {
    outObjHeaders  :: [Header]      -- ^ Accessor for header.
  , outObjBody     :: OutBody       -- ^ Accessor for outObj body.
  , outObjTrailers :: TrailersMaker -- ^ Accessor for trailers maker.
  }

data OutBody = OutBodyNone
             -- | Streaming body takes a write action and a flush action.
             | OutBodyStreaming ((Builder -> IO ()) -> IO () -> IO ())
             | OutBodyBuilder Builder
             | OutBodyFile FileSpec

For the client library, Request is just a wrapper data type:

-- | Request from client.
newtype Request = Request OutObj deriving (Show)

Response in the server library is also a wrapper:

-- | Response from server.
newtype Response = Response OutObj deriving (Show)

The same discussion applies to incoming data, thanks to pseudo-headers such as :method and :path:

type InpBody = IO ByteString

-- | Input object
data InpObj = InpObj {
    inpObjHeaders  :: HeaderTable   -- ^ Accessor for headers.
  , inpObjBodySize :: Maybe Int     -- ^ Accessor for body length specified in content-length:.
  , inpObjBody     :: InpBody       -- ^ Accessor for body.
  , inpObjTrailers :: IORef (Maybe HeaderTable) -- ^ Accessor for trailers.
  }

Here comes Response for the client library:

-- | Response from server.
newtype Response = Response InpObj deriving (Show)

And Request in the server library is:

-- | Request from client.
newtype Request = Request InpObj deriving (Show)

HTTP/3 client and server library

Now it was time to implement HTTP/3. Thanks to Request and Response from the HTTP/2 library, and to the QUIC library itself, I was able to concentrate on how to manipulate multiple streams. Suddenly, I got an insight about QUIC streams.

The QUIC library in Haskell now provides an abstract data type for streams:

data Stream

Clients can create a Stream much like a socket:

stream :: Connection -> IO Stream
unidirectionalStream :: Connection -> IO Stream

A server gets a Stream when a new QUIC stream arrives:

acceptStream :: Connection -> IO (Either QUICError Stream) 

Data can be received and sent through a Stream:

-- return "" when FIN is received
recvStream :: Stream -> Int -> IO ByteString
sendStream :: Stream -> ByteString -> IO () 
-- Sending FIN
shutdownStream :: Stream -> IO () 

With these APIs, I was able to develop HTTP/3 really fast. In the sense that a lightweight thread is used per stream, programming HTTP/3 is like programming HTTP/1.1. In the sense that frames are used, it is like programming HTTP/2. I felt that my long careers with HTTP/1.1 and HTTP/2 converged in HTTP/3!

Implementation status of QUIC in Haskell

After implementing HTTP/2 in Haskell and TLS 1.3 in Haskell, I have been working on IETF QUIC. This article explains what I did in the 2019 Japanese fiscal year, as a report to our sponsor, Internet Initiative Japan (IIJ). I hold titles at both IIJ and IIJ-II; I'm wearing my IIJ-II hat in this article.

If you wonder why I'm using Haskell to implement network protocols, please take a look at my position paper for NetPL 2017. In short, I love its strong and rich type system and its concurrency based on lightweight threads (green threads).

This article mainly describes the server side because it is more challenging than the client side.

APIs

To implement APIs for QUIC servers, I started with the accept-then-fork style inspired by Berkeley socket APIs:

withQUICServer :: ServerConfig -> (QUICServer -> IO ()) -> IO ()
accept :: QUICServer -> IO Connection
close :: Connection -> IO ()

Toy server code that executes server :: Connection -> IO () in a lightweight thread looks like this:

withQUICServer conf $ \qs -> forever $ do
    conn <- accept qs
    void $ forkFinally (server conn) (\_ -> close conn)

It turned out that my test server (mew.org:4433) based on these APIs occasionally got stuck. First I suspected buffer overruns and illegal UDP packets, so I set exception handlers everywhere, but no exception was caught. I checked all the code of the underlying libraries and found a careless bug, but it was not the source of this problem. At this point, I ran out of ideas.

After taking a deep breath, I squeezed print statements in everywhere to try to understand what was going on. Printing was less smooth than I expected, and I realized that the source of the problem was the API itself. accept was processed in the main thread. So, if the handshake processing in accept got stuck, everything got stuck. This experience led to simpler APIs:

runQUICServer :: ServerConfig -> (Connection -> IO ()) -> IO ()

There is no intermediate data type (QUICServer) anymore. The higher-order function (the loan pattern, in OOP terminology) ensures that the handshake is processed in a spawned thread and that the Connection is closed in the end.
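As a minimal base-only sketch of this pattern (Connection and closeConn are hypothetical stand-ins, not the quic library's definitions):

```haskell
import Control.Concurrent (ThreadId, forkFinally)

-- Hypothetical stand-ins for the real types, so the sketch is self-contained.
data Connection = Connection

closeConn :: Connection -> IO ()
closeConn _ = return ()

-- The loan pattern: the library acquires the Connection, lends it to the
-- user action in a spawned thread, and guarantees that the close runs
-- afterwards, whether the action returns normally or throws.
withConnection :: IO Connection -> (Connection -> IO ()) -> IO ThreadId
withConnection acquire action = do
    conn <- acquire
    forkFinally (action conn) (\_ -> closeConn conn)
```

The user never sees the acquisition or release steps, which is exactly what runQUICServer guarantees for the handshake and the final close.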

QUIC multiplexes streams in a connection. To send and receive data, the following APIs are provided at this moment:

recvStream :: Connection -> IO (StreamID, ByteString)
sendStream :: Connection -> StreamID -> Fin -> ByteString -> IO ()
shutdownStream :: Connection -> StreamID -> IO ()

You can find the current APIs in Network.QUIC.

TLS handshake

TLS stands for Transport Layer Security, but QUIC uses TLS 1.3 only for its data structures, not as a transport. This means that QUIC frames on UDP convey the TLS handshake messages without the TLS record layer. Encryption and decryption are carried out by QUIC itself, not by TLS.

To separate the TLS data types from the record layer and transport, I first split up many functions of the server and client code in the TLS library in Haskell and introduced static-function APIs for QUIC. A QUIC client with these APIs succeeded in communicating with the ngtcp2 server. However, this approach had the following drawbacks:

  • The APIs only cover limited cases. To cover all cases, including hello retry request, resumption, and new session tickets, more APIs would have to be provided.
  • The modifications are too drastic. Whenever some code is merged into the client and server code in the master branch, I have to redo the division. (How many times did I rework it?)

Olivier Chéron, another maintainer of the TLS library, hesitated to merge my modifications and suggested that I introduce a flexible record layer instead. This motivated me to explore another approach based on lightweight threads. My conclusion on the structure of the record layer is as follows:

data RecordLayer = RecordLayer {
    encodeRecord :: Record Plaintext -> IO (Either TLSError ByteString)
  , sendBytes    :: ByteString -> IO ()
  , recvRecord   :: IO (Either TLSError (Record Plaintext))
  }

By executing a TLS thread with a transparent record layer (no encryption/decryption and no I/O), we can obtain the TLS handshake messages themselves. The TLS thread can be controlled through the following APIs:

newQUICServer :: ServerParams -> IO ServerController
type ServerController = ServerControl -> IO ServerStatus
data ServerControl =
    PutClientHello ClientHello -- SendRequestRetry, SendServerHello, ServerNeedsMore
  | GetServerFinished -- SendServerFinished
  | PutClientFinished Finished -- SendSessionTicket, ServerNeedsMore
  | ExitServer -- ServerHandshakeDone
data ServerStatus =
    ServerNeedsMore
  | SendRequestRetry ServerHello
  | SendServerHello ServerHello [ExtensionRaw] (Maybe EarlySecretInfo) HandshakeSecretInfo
  | SendServerFinished Finished ApplicationSecretInfo
  | SendSessionTicket SessionTicket
  | ServerHandshakeDone

With these APIs, all cases are covered with only small modifications to the client and server code. The stability has been checked against many QUIC implementations. The usage of these APIs can be found in Network.QUIC.Handshake.

One long-standing issue was the timing of terminating the TLS thread on the client side. After sending Client Finished to a server, the client waits for a New Session Ticket (NST). However, some servers do not send an NST.

QUIC draft 25 introduced the HANDSHAKE_DONE frame, which is sent from servers to clients. Thanks to this, the main thread of the QUIC client can now terminate the TLS thread when HANDSHAKE_DONE is received. During the interoperability test for draft 25, I noticed that the ngtcp2 server sends the NST in a CRYPTO frame after HANDSHAKE_DONE. So, I changed the Haskell QUIC client to wait for a period after receiving HANDSHAKE_DONE, hoping that the NST will also be received during that period.

The server architecture

runQUICServer first spawns a Dispatcher thread for each network interface specified in ServerConfig. Each Dispatcher manages one wildcard socket, {UDP, local-addr, local-port, *, *}. After an Initial packet is received from a wildcard socket, a connected socket, {UDP, local-addr, local-port, remote-addr, remote-port}, is created based on the peer's address. For this connected socket, several threads are spawned to maintain the Connection, as illustrated in Fig 1:

(Fig 1: the server architecture)

  • Launcher: a thread to make a new Connection and launch the user server code (Connection -> IO ()) specified to runQUICServer. recvStream pops incoming data from InputQ and sendStream pushes outgoing data to OutputQ.
  • TLS: a thread for the TLS handshake. It receives TLS handshake messages as ServerControl and gives back TLS handshake messages as ServerStatus. This thread is terminated when the TLS handshake is completed.
  • Reader: a thread to read data from the connected socket and pass it to Receiver via RecvQ.
  • Receiver: a thread to read data from RecvQ and decrypt it. It passes CRYPTO and STREAM frames to Launcher and processes control frames such as ACK and PING. For instance, when ACK frames are received, the corresponding packets are removed from RetransDB.
  • Sender: a thread to read data from OutputQ and encrypt-then-send it to the connected socket. It also saves the original plain packets to RetransDB.
  • Resender: a thread to repeatedly pop packets from RetransDB and push them to OutputQ.

Reader and Receiver

Processing incoming QUIC packets is two-pass: decoding and decryption. This separation made the code drastically simpler. Reader decodes the unprotected part of the header using the following function:

decodeCryptPackets :: ByteString -> IO [CryptPacket]

Note that this function does not take the Connection argument. CryptPacket is defined as follows:

data CryptPacket = CryptPacket Header Crypt
data Header = Initial   Version  CID CID Token
            | RTT0      Version  CID CID
            | Handshake Version  CID CID
            | Short              CID
data Crypt = Crypt {
    cryptPktNumOffset :: Int
  , cryptPacket       :: ByteString
  }

CryptPacket is passed to Receiver via RecvQ:

newtype RecvQ = RecvQ (TQueue CryptPacket)

It is Receiver's responsibility to decrypt the protected part of the header and the encrypted body using the following function:

decryptCrypt :: Connection -> Crypt -> EncryptionLevel -> IO (Maybe Plain)

decryptCrypt takes Connection as an argument since Connection holds secrets. Plain is defined as follows:

data Plain  = Plain  {
    plainFlags        :: Flags Raw
  , plainPacketNumber :: PacketNumber
  , plainFrames       :: [Frame]
  }

Sender and Resender

Unlike the incoming packet processing, the outgoing packet processing is one-pass:

encodePlainPacket :: Connection -> PlainPacket -> Maybe Int -> IO [ByteString]

The third argument controls padding. If Just n is specified, padding is generated so that the size of the resulting packet is exactly n. Otherwise, no padding is used. PlainPacket is defined as follows:

data PlainPacket = PlainPacket Header Plain

Sender stores PlainPackets in RetransDB, while Resender obtains PlainPackets from RetransDB and enqueues them to OutputQ again.

Dispatchers

Dispatchers carry out the following jobs:

  • Passing information for new connections
  • Handling retry and version negotiation packets
  • Handling migration and NAT rebinding
  • Combining fragmented Initial packets

Dispatchers decode incoming packets using the following function:

decodePackets :: ByteString -> IO [PacketI]

PacketI is defined as follows:

data PacketI = PacketIV VersionNegotiationPacket
             | PacketIR RetryPacket
             | PacketIC CryptPacket
             | PacketIB BrokenPacket

This data type captures the fact that version negotiation packets and retry packets are not encrypted. VersionNegotiationPacket and RetryPacket should be received only by clients, and servers should receive only CryptPacket. For instance, if a server receives a VersionNegotiationPacket, it ignores it.

New connections

A Dispatcher maintains a dictionary of Connections. The keys are destination connection IDs, and the values are pairs of a Connection and a MigrationQ (described later).

If the version of an Initial CryptPacket is known, the Dispatcher checks the Connection dictionary to see whether the destination connection ID is stored. If it is not, the Dispatcher prepares a new Connection. A new RecvQ is created, and the Initial packet is pushed into it. Then the information needed to create a Connection, including the RecvQ and the peer's address/port, is queued into the so-called accepting queue. The destination connection ID is not registered in the Connection dictionary at this moment, to prevent Initial flooding attacks.

The main thread repeatedly spawns Launchers. A Launcher accepts the queued information and tries to make a new Connection. Recall that Reader and Sender use a connected socket, while Dispatcher holds a wildcard socket. How can we make a connected socket for a new Connection safely?

Suppose that a wildcard socket, {UDP, 192.0.2.1, 4433, *, *}, exists and the peer's address/port is 203.0.113.0:3456. A socket, {UDP, 192.0.2.1, 4433, 203.0.113.0, 3456}, should be created without errors or race conditions. My first attempt was as follows:

  1. Create a new UDP socket with SO_REUSEADDR
  2. Bind it to 192.0.2.1:4433
  3. Connect it to 203.0.113.0:3456

Unfortunately, BSD variants reject (2). Linux accepts (2), but a race condition could occur. Kazuho Oku, one of the quicly maintainers, suggested that I use the ANY address in (2). The improved process is as follows:

  1. Create a new UDP socket with SO_REUSEADDR
  2. Bind it to *:4433
  3. Connect it to 203.0.113.0:3456. This also binds the local address to 192.0.2.1.

This process succeeds even on BSD variants, and there are no race conditions on any platform.
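Using the network package, the improved process might be sketched as follows (my sketch, with error handling omitted; the real code differs):

```haskell
import Network.Socket

-- Sketch of the improved process (error handling omitted).  Binding to
-- the ANY address avoids the failure on BSD variants; connect() then
-- fixes the local address, e.g. to 192.0.2.1 in the example above.
connectedSocketFor :: PortNumber -> SockAddr -> IO Socket
connectedSocketFor localPort peer = do
    s <- socket AF_INET Datagram defaultProtocol
    setSocketOption s ReuseAddr 1                                   -- step 1
    bind s (SockAddrInet localPort (tupleToHostAddress (0, 0, 0, 0)))  -- step 2: bind to *:localPort
    connect s peer                                                  -- step 3
    return s
```

In the example above, localPort would be 4433 and peer would be 203.0.113.0:3456.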

After a connected socket is created and the TLS handshake has been done through it, Launcher registers the Connection in the dictionary.

Retry and version negotiation

When the version in a CryptPacket is unknown (for instance, a grease value), Dispatcher sends a version negotiation packet.

If no token is contained in an Initial CryptPacket, no Connection is found in the dictionary, and the server configuration requires retry, the Dispatcher sends a Retry packet.

If a valid token is provided, a new connection is created as described in the previous subsection.

Migration or NAT rebinding

When a client moves to a new address/port, or the client port is changed by NAT rebinding, Dispatcher receives Short CryptPackets from its wildcard socket whose destination connection IDs are found in the Connection dictionary. In this case, Dispatcher creates a MigrationQ, registers it in the dictionary, and spawns Migrator. After that, Dispatcher enqueues this first migration packet, and any further migration packets, to MigrationQ.

Migrator creates a new connected socket for the new address/port and spawns another Reader. Then Migrator passes the new socket to Sender, asks Sender to send PathChallenge, and sleeps until Reader receives PathResponse. Then Migrator reads the packets from MigrationQ and enqueues them to RecvQ. After a period, Migrator terminates the old Reader.

(Fig 2: the migration architecture)

Fragmented Initial packets

In normal cases, a client sends one Initial packet when it tries to make a new connection. However, if a client wants to send a large Client Hello message, the message is divided into multiple Initial packets.

My old implementation handled them naively. Consider the case of two Initial packets. When the first packet arrives, a wildcard socket catches it. A connected socket is created and the packet is acknowledged. Then the second packet is also caught by the wildcard socket. Creating another connected socket for the peer fails because a socket with the same parameters already exists. Since the second packet is not acknowledged, the client resends it. This time the connected socket captures it, so a new connection can be created.

The advantage of this approach is that Dispatcher does not need to maintain any information. The disadvantage is that this logic does not work on Linux. To my surprise, connect(2) succeeds even with the same parameters! This results in unexpected behavior.

So, I gave up on this zero-cost approach and introduced another dictionary of fixed size. The keys are original destination connection IDs, while the values are RecvQs. An entry is created when the first Initial packet arrives. Succeeding Initial packets are then queued into the RecvQ found in the dictionary.

I used a priority search queue (PSQ) to implement this fixed-size dictionary. Choosing time as the priority implements FIFO; the combination of size, deleteMin, and insert implements the fixed size.
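The same behavior can be sketched with plain Data.Map instead of a PSQ (my illustration; eviction here is linear-time, which is exactly what a priority search queue avoids):

```haskell
import           Data.List       (minimumBy)
import qualified Data.Map.Strict as Map
import           Data.Ord        (comparing)

type Time = Int

-- Insert a key with its arrival time into a table of at most 'limit'
-- entries; when the table is full, evict the oldest entry first (FIFO).
-- A PSQ does this with better asymptotics; this Map version only
-- illustrates the behavior.
insertFIFO :: Ord k => Int -> k -> Time -> v
           -> Map.Map k (Time, v) -> Map.Map k (Time, v)
insertFIFO limit k t v m
    | Map.size m < limit = Map.insert k (t, v) m
    | otherwise          = Map.insert k (t, v) (Map.delete oldest m)
  where
    -- Linear scan for the entry with the smallest time (the oldest).
    oldest = fst $ minimumBy (comparing (fst . snd)) (Map.toList m)
```

With size limit 2, inserting "a", "b", and then "c" evicts "a", the oldest entry, leaving "b" and "c" in the table.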

Final remark

My code is available from the repository on GitHub. You can see the results of the 16th inter-operability test event. A lot of work is still left; I will tackle HTTP/3 and QPACK in March 2020.

The contents of this article are just a snapshot, and the code will probably change substantially during development. So, I should write another article to update the contents when the quic library in Haskell is released.

I thank all people, especially IETF QUIC WG guys, who helped me.

Implementing graceful-close in Haskell network library

Closing connections gracefully is an old and new problem in network programming. In the HTTP/1.1 days, it did not get much attention, since HTTP/1.1 is a synchronous protocol. However, as Niklas Hambüchen concretely and completely explained, HTTP/2 servers should close connections gracefully, because HTTP/2 is an asynchronous protocol.

Unfortunately, most HTTP/2 server implementations do not close connections gracefully, hence browsers cannot display pages correctly in some situations. The first half of this article explains the problem and its solution step by step in general. The second half talks about how to implement graceful-close in Haskell network library.

Normal cases of HTTP/1.1

Roughly speaking, synchronous HTTP/1.1 can be implemented as follows:

  • Browser: the loop of writing request and reading response
  • Server: the loop of reading request and writing response

Since HTTP/1.1 uses persistent connections by default, a browser should set the Connection: close header to close the current connection.

When the server receives the Connection: close header, it closes the connection with close() after sending its response. Of course, the browser knows that the connection is being closed. So, the browser reads the response until read() returns 0, which means EOF. Then, the browser closes the connection with close().

Error cases of HTTP/1.1

For security reasons, HTTP/1.1 servers close connections. The following are typical situations:

  • The idle timer expires
  • The number of requests reaches the limit

In these cases, an HTTP/1.1 server calls close(), which results in a TCP FIN being generated.

When the browser tries to write the next request to the same connection, it would be nice to check whether the connection is still alive. Are there any system calls to check this? If my understanding is correct, there is no such system call that avoids I/O. All the browser can do is optimistically read or write the connection if it wants to reuse it.

The case of TCP FIN

So, what happens if the browser reads from or writes to a connection that has already received a TCP FIN?

write() succeeds. However, since the server socket is already closed, the TCP layer of the browser has received a TCP FIN, which is not communicated to the browser.

read() returns 0 (EOF), of course.

The case of TCP RST

Another interesting question is what happens if the browser reads from or writes to a connection that has already received a TCP RST.

write() causes SIGPIPE. If its signal handler ignores it, write() resumes and returns EPIPE.

read() returns ECONNRESET.

Recovering in HTTP/1.1

Suppose that an HTTP/1.1 server closed a connection with close(), but the browser tries to send one more request. When the TCP layer of the server receives the request, it sends a TCP RST back to the browser. The browser tries to read the corresponding response and notices that the server has reset the connection. So, the browser can open another connection and re-send the request to the server.

In this way, recovering in HTTP/1.1 is not so difficult.

Normal cases of HTTP/2

HTTP/2 uses only one TCP connection between a browser and a server. Since HTTP/2 is asynchronous, the browser can send requests at any time, and the server can send back responses in any order. To associate a request with its corresponding response, a unique stream ID is given to the pair. In the following figure, the order of response 1 and response 2 is flipped.

To close the connection, the browser should send GOAWAY. When the HTTP/2 server receives GOAWAY, it should send back GOAWAY. Typical implementations call close() after that.

Error cases of HTTP/2

For security reasons, an HTTP/2 server itself closes a connection by sending GOAWAY. Again, typical implementations call close() after that.

It is likely that the browser sends requests asynchronously, and some request reaches the server after the socket is gone. In this case, as explained earlier, a TCP RST is sent back to the browser.

Unfortunately, the TCP RST drops all data waiting to be read in the TCP layer of the browser. This means that when the browser tries to read its response, only ECONNRESET is returned. The GOAWAY disappears.

GOAWAY contains the last stream ID that the server actually processed. Without receiving GOAWAY, the browser cannot tell the recovery point. In other words, the browser cannot render the target page correctly. This problem actually happens in the real world, and most HTTP/2 server implementations have it.

Graceful close

So, what is the solution? The masterpiece book, "UNIX Network Programming, Volume 1: The Sockets Networking API (3rd Edition)" by W. Richard Stevens et al., suggests the following way:

  • The server should call shutdown(SHUT_WR) to close the sending side but keep the receiving side open. Even if requests reach the server after shutdown(), no TCP RST is generated.
  • The browser can read GOAWAY in this scenario and send back GOAWAY, followed by close().
  • The server should read data until read() returns 0 (EOF).
  • The server should finally call close() to deallocate the socket resource.

It is not guaranteed that the browser sends back a TCP FIN. So, the server should set a timeout on read(). One approach is the SO_RCVTIMEO socket option.

Implementations in Haskell

From here, I would like to explain how to implement graceful-close in the Haskell network library.

Approach 1: the SO_RCVTIMEO socket option

After reading "UNIX Network Programming", I started with the C-language way, but many features were missing in the network library:

  1. To time out reads, SO_RCVTIMEO should be supported in setSocketOption.
  2. Since SO_RCVTIMEO is effective only for blocking sockets, a function to set a non-blocking socket back to blocking is necessary.
  3. Receiving data from blocking sockets without triggering the IO manager is also needed.

I confirmed that this actually works, but I threw it away. To avoid blocking the RTS when calling the receiving function of (3), the function must be called via a safe FFI call. This means that an additional native (OS) thread is consumed. Closing connections should not be that costly. All in all, blocking sockets are not the Haskell way!

Approach 2: the timeout function

Of course, a very easy way is to combine the timeout function with the original recv function, which may trigger the IO manager. This actually works. But again I threw it away, since an additional lightweight thread is consumed by timeout.

Approach 3: the threadDelay function

I finally hit upon the idea of using threadDelay. For this approach, a new receiving function is necessary: it uses a non-blocking socket and does not trigger the IO manager. The algorithm is as follows:

  • Loop until the timeout expires:
    • Read data
    • If the read returns EAGAIN, call threadDelay with a small delay value; if it returns data, break the loop

The advantage of this approach is availability: it works on all platforms, with both the threaded and the non-threaded RTS. The disadvantage is that the timing of the timeout can be inaccurate.
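The loop can be sketched generically (hypothetical names; in the real code the tryRecv action would be a non-blocking socket read that reports EAGAIN as Nothing):

```haskell
import Control.Concurrent (threadDelay)

-- Approach 3 as a generic sketch: poll an action that returns Nothing
-- when no data is available (EAGAIN), sleeping a little between tries,
-- until data arrives or the total timeout is used up.  The timeout is
-- only approximate, as noted in the text.
pollWithTimeout :: Int          -- ^ total timeout in microseconds
                -> Int          -- ^ delay per retry in microseconds
                -> IO (Maybe a) -- ^ non-blocking receive attempt
                -> IO (Maybe a)
pollWithTimeout total step tryRecv = go total
  where
    go remaining
      | remaining <= 0 = return Nothing        -- timed out
      | otherwise = do
          r <- tryRecv
          case r of
            Just x  -> return (Just x)         -- got data: break the loop
            Nothing -> threadDelay step >> go (remaining - step)
```

No extra thread is needed, at the cost of waking up once per step even when nothing happens.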

Approach 4: callbacks of the IO/Timer manager

Michael Snoyman suggested using a pair of callbacks for the IO and Timer managers. First, an MVar is prepared. Then the main code registers a callback with the IO manager, asking it to put data into the MVar when data is available. At the same time, the main code registers a callback with the Timer manager, asking it to put a timeout signal into the MVar when the timeout expires. The two callbacks race, and the main code accepts the result of the race through the MVar.
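The race itself can be sketched with base primitives, using forkIO in place of the manager callbacks (the real implementation registers callbacks and forks no threads):

```haskell
import Control.Concurrent (forkIO, newEmptyMVar, takeMVar, tryPutMVar)
import Control.Monad (void)

-- A sketch of approach 4's race: both producers try to fill the same
-- MVar, and the main code accepts whichever result arrives first.
-- tryPutMVar makes the loser's put a harmless no-op.
raceToMVar :: IO a -> IO a -> IO a
raceToMVar readAction timeoutAction = do
    mv <- newEmptyMVar
    void $ forkIO (readAction    >>= void . tryPutMVar mv)
    void $ forkIO (timeoutAction >>= void . tryPutMVar mv)
    takeMVar mv
```

With manager callbacks instead of threads, the same structure costs no thread at all, which is the point of the suggestion.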

This idea is awesome because no resources are wasted. What impressed me is that he knows the IO/Timer managers better than I do, even though I am one of the developers of those managers!

Final remark

The Haskell network library version 3.1.1.0 will provide gracefulClose. For the threaded RTS on UNIX, where the IO manager is available, approach 4 is taken. On Windows, or with the non-threaded RTS, where the IO manager is not available, approach 3 is taken.

My deep thanks go to Niklas Hambüchen for pointing out this problem, discussing solutions patiently, and reviewing my implementations thoroughly. I would also like to thank Tamar Christina for helping with the development on Windows, and Michael Snoyman for suggesting approach 4.

Supplementary notes on Programming in Haskell, 2nd edition

I will update this article as needed.

Impractical examples

Examples of the form "in other languages this would be messy, but in Haskell it becomes such elegant code" are usually not practical. In this book, the following examples fall into that category.

If you want to see practical code, read "Haskellの神話" (The Myth of Haskell).

Data types not covered in the book

When writing practical programs, use Text instead of String. It is defined in the Data.Text module of the text package. Text is not a list, so it cannot be handled with list programming; instead, you manipulate it through its dedicated API.

The data type for non-negative integers is Word, defined in the Data.Word module. The example in section 8.3 can be defined safely using Word:

newtype Nat = N Word

Incidentally, the fixed-size types Word8, Word16, Word32, and Word64 are also provided. The same goes for Int.

If you want to perform bit operations on Int or Word, use Data.Bits.
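For example (my illustration, not from the book):

```haskell
import Data.Bits (popCount, shiftR, (.&.))
import Data.Word (Word8)

-- Splitting a Word into its low byte and the rest, plus counting set
-- bits: typical uses of Data.Bits on fixed-size integers.
lowByte :: Word -> Word8
lowByte w = fromIntegral (w .&. 0xff)

dropByte :: Word -> Word
dropByte w = w `shiftR` 8

setBits :: Word -> Int
setBits = popCount
```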

newtype

Section 8.4 says that recursive types can also be defined with newtype, but no example is given. With only one constructor, how can the type recurse? Section 8.1 contains the following deliberately wrong example:

type Tree = (Int,[Tree])

With newtype, this becomes valid code:

newtype Tree = Node (Int,[Tree])

Let's generalize it:

newtype Tree a = Node (a,[Tree a])
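Functions over this type pattern-match through the single constructor; for example (my examples, not the book's):

```haskell
-- The definition from above, repeated so this block is self-contained.
newtype Tree a = Node (a, [Tree a])

-- The newtype constructor is unwrapped by pattern matching, and the
-- recursion goes through the list of subtrees.
size :: Tree a -> Int
size (Node (_, ts)) = 1 + sum (map size ts)

flatten :: Tree a -> [a]
flatten (Node (x, ts)) = x : concatMap flatten ts
```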

Combinatorial functions

Section 9.4 suddenly introduces subs, interleaves, and perms. If you want to know how they work, read 珠玉のリスト・プログラミング. The first edition explained them in an appendix, but the second edition does not include that appendix.

I translated Programming in Haskell, 2nd edition

The translation and review of the second edition of Programming in Haskell are complete, and it has been published by Lambda Note. I would like to thank the five reviewers once again. 177 issues were closed, but since some issues contained multiple comments, roughly 250 places were improved.

For those who did not buy the first edition, or who want to start learning Haskell now, I can recommend it without reservation. This article explains what has changed, for those who own the first edition and are wondering whether to buy the second.

Typeface

The code changed from a math-style typeface to an upright block typeface. Papers on Haskell have a tradition of using a math-style typeface, which the first edition adopted, but this was its most unpopular aspect. The second edition does nothing eccentric here, so you can read it with peace of mind.

The system used

The system used changed from Hugs to GHC. When translating the first edition, I wavered over rewriting the code for GHC and held back, which I later regretted; now the matter is settled.

Complete examples

The first edition had the serious problem that the parser code did not work as written; the second edition has no such problem.

End-of-chapter exercises

The number of end-of-chapter exercises has increased. Even those who solved every problem in the first edition will find some new ones, though not many.

Contents

Let's compare the tables of contents of the original editions.

2nd edition | 1st edition
1 Introduction | 1 Introduction
2 First steps | 2 First Steps
3 Types and classes | 3 Types and Classes
4 Defining functions | 4 Defining Functions
5 List comprehensions | 5 List Comprehensions
6 Recursive functions | 6 Recursive Functions
7 Higher-order functions | 7 Higher-Order Functions
8 Declaring types and classes | 10 Declaring Types and Classes
9 The countdown problem | 11 The Countdown Problem
10 Interactive programming | 9 Interactive Programs
11 Unbeatable tic-tac-toe |
12 Monads and more |
13 Monadic parsing | 8 Functional Parsers + 9 Calc
14 Foldables and friends |
15 Lazy evaluation | 12 Lazy Evaluation
16 Reasoning about programs | 13 Reasoning About Programs
17 Calculating compilers |

  • The structure from chapter 8 onward has changed significantly. Since monads are explained before parsers, the parser code now works as-is.
  • Four chapters have been added. The chapters on Monad and Foldable reflect the current state of the language. For details, read the translator's preface.

Summary

The first half has many differences, largely because of the switch to GHC; the latter half has changed so much that even taking a diff is meaningless.

HTTP/2 server library in Haskell

I'm trying to develop QUIC in Haskell. In short, QUIC is a fast and reliable transport protocol based on UDP. You can think of it as TCP2. HTTP/2 over QUIC is now called HTTP/3.

Two levels of dispatching are necessary for QUIC:

  1. Dispatching QUIC packets to connections
  2. Dispatching QUIC streams in a connection to something (perhaps to lightweight thread workers)

OS kernels take care of the first dispatching for TCP, but for QUIC we have to implement it in user land. I believe that implementing it in Haskell is not so difficult.

But the second dispatching is tough. As I described in Experience Report: Developing High Performance HTTP/2 Server in Haskell and Supporting HTTP/2, I mapped each HTTP/2 stream to a lightweight-thread worker. In this architecture, some other threads are involved in controlling the workers.

Should I reinvent the similar architecture for QUIC? My brain got befuddled.

I finally reached a conclusion: my question arose because my HTTP/2 server was hard-coded in Warp. If I could extract it as a generic library for HTTP/2 servers, I would be able to reuse it for HTTP/3.

It took much time, but the result is promising. The APIs are beautiful and functional. They even make it possible to calculate a checksum in a trailer for a streaming response body.

I have already released the http2 library version 2.0.0 with the Network.HTTP2.Server module. Warp will soon switch from its original implementation to this server library.

The APIs are inspired by WAI but are independent from it. I hope that other HTTP engines can adopt the HTTP/2 server library easily.