tcpproxy doesn't do a graceful shutdown of the tcp connections #21
smira added a commit to smira/tcpproxy that referenced this issue on Oct 14, 2020:

> Fixes inetaf#21
>
> Signed-off-by: Andrey Smirnov <[email protected]>
smira added a commit to siderolabs/tcpproxy that referenced this issue on May 22, 2023:

> Fixes inetaf#21
>
> Signed-off-by: Andrey Smirnov <[email protected]>
cfergeau added a commit to cfergeau/gvisor-tap-vsock that referenced this issue on Aug 21, 2024:

> This causes a regression in gvproxy when it's used by podman: containers/podman#23616
>
> Thanks to Maciej Szlosarczyk <[email protected]> for investigating and finding the faulty commit!
>
> Reverting inetaf/tcpproxy commit 2862066 is a bit convoluted: we first need to undo the module name change (inet.af/tcpproxy -> github.com/inetaf/tcpproxy) done in commit 600910c, and then add a go module `replace` directive to redirect the no-longer-existing inet.af/tcpproxy to the commit we want in github.com/inetaf/tcpproxy. This way, the module name in the gvisor-tap-vsock go.mod and in the github.com/inetaf/tcpproxy go.mod is the same (inet.af/tcpproxy), and we can use older commits in this repository.
>
> It's unclear what's causing the regression, as the commit log/PR description/associated issue don't provide useful details: inetaf/tcpproxy@2862066
>
> The best I could find is tailscale/tailscale#10070:
>
> > The close in the handler sometimes occurs before the buffered data is forwarded. The proxy could be improved to perform a half-close dance, such that it will only mutually close once both halves are closed or both halves error.
>
> and inetaf/tcpproxy#21, which seems to be the same issue as inetaf/tcpproxy#38, which is the issue fixed by the commit triggering the regression.
>
> What could be happening is that before inetaf/tcpproxy commit 2862066, as soon as one side of the connection was closed, the other half was also closed, while after commit 2862066 the tcpproxy code waits for both halves of the connection to be closed. So maybe we are missing a connection close somewhere in gvproxy's code :-/
>
> Signed-off-by: Christophe Fergeau <[email protected]>
> Tested-by: Maciej Szlosarczyk <[email protected]>
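The `replace` directive the message describes is standard Go module syntax; it would have roughly this shape (the pseudo-version below is purely illustrative, not the actual commit that was pinned):

```
// go.mod sketch (pseudo-version is illustrative only)
replace inet.af/tcpproxy => github.com/inetaf/tcpproxy v0.0.0-20200125044825-b6bb9b5b8252
```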
Hi,

Reading through the code, it seems to me that the teardown of TCP connections is not graceful and might close connections too early. `HandleConn` calls `Close()` on both TCP connections when it returns (using `defer`), and `HandleConn` returns as soon as one of the two `proxyCopy` goroutines exits. This shuts down the second `proxyCopy` goroutine, which might still want to transfer data in the opposite direction.
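The pattern in question looks roughly like this (a minimal sketch with made-up names, not the actual tcpproxy source):

```go
package tcpproxysketch

import (
	"io"
	"net"
)

// handleConn sketches the teardown described above: both connections are
// closed (via defer) as soon as the FIRST copy direction finishes.
func handleConn(client, backend net.Conn) {
	defer client.Close()
	defer backend.Close()

	errc := make(chan error, 2)
	go func() { _, err := io.Copy(backend, client); errc <- err }()
	go func() { _, err := io.Copy(client, backend); errc <- err }()

	// Returning after the first direction ends tears down the second
	// direction even if it still has buffered data to deliver.
	<-errc
}
```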
I wrote a little contrived RPC scenario where a TCP client closes its write stream to signal an `EOF` and waits for a reply.

Client:
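(The original snippet is not shown here; this is a reconstruction of what such a client could look like, with a made-up address and payload.)

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net"
)

func main() {
	// Address is illustrative: point it at the server, or at tcpproxy.
	conn, err := net.Dial("tcp", "127.0.0.1:8080")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	if _, err := conn.Write([]byte("ping")); err != nil {
		log.Fatal(err)
	}
	// Half-close the write side to signal EOF to the server.
	if err := conn.(*net.TCPConn).CloseWrite(); err != nil {
		log.Fatal(err)
	}

	// Wait for the reply on the still-open read side.
	reply, err := io.ReadAll(conn)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("reply: %q\n", reply)
}
```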
Server:
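(Likewise a reconstruction of the server side, matching the illustrative address above.)

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net"
)

func main() {
	ln, err := net.Listen("tcp", "127.0.0.1:8080")
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Fatal(err)
		}
		go func(c net.Conn) {
			defer c.Close()
			// Read until the client's CloseWrite delivers EOF...
			req, err := io.ReadAll(c)
			if err != nil {
				return
			}
			// ...then send the reply back on the still-open write side.
			fmt.Fprintf(c, "pong: %s", req)
		}(conn)
	}
}
```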
When I connect the client and server directly, this works; but with `tcpproxy` in the middle, the client gets an empty response, because the TCP connections are closed as soon as the client issues `CloseWrite`. So in this case `tcpproxy` behaves differently than a direct connection.

Looking at the manpage of socat, I found a section that seems related: when one of the streams reaches EOF, socat transfers the EOF condition to the other stream by shutting down only its write side, giving the reverse direction a chance to terminate gracefully.
So my question would be: would it be better to propagate the `CloseWrite()` instead of just closing the TCP connection?
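A sketch of what that could look like (not a patch against the actual tcpproxy code; it assumes both ends are `*net.TCPConn` so the half-close methods are available):

```go
package tcpproxysketch

import (
	"io"
	"net"
)

// proxyHalf copies one direction, then propagates EOF with CloseWrite
// instead of closing the whole connection, so the reverse direction can
// still deliver its data.
func proxyHalf(dst, src *net.TCPConn, done chan<- struct{}) {
	io.Copy(dst, src) // runs until src reaches EOF (or errors)
	dst.CloseWrite()  // forward the half-close to the peer
	done <- struct{}{}
}

func handleConnGraceful(client, backend *net.TCPConn) {
	done := make(chan struct{}, 2)
	go proxyHalf(backend, client, done)
	go proxyHalf(client, backend, done)

	// Fully close only once BOTH directions have finished.
	<-done
	<-done
	client.Close()
	backend.Close()
}
```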