Continuous Deployment of Boost.Beast-based server


Continuous Deployment of Boost.Beast-based server

Boost - Users mailing list
Imagine I have a WebSocket server, implemented using Boost.Asio and Boost.Beast.

It's deployed and serving clients in production or test environments, but I fixed a bug, checked in the fix, passed CI, and would like to redeploy without downtime for existing clients. New clients should use the same address, but be served by the newly deployed, fixed server, while ongoing WebSocket connections keep using the old server.

Is that possible? In-process? Or must one use some kind of "front-end server" that redirects new traffic (i.e. new WebSocket sessions and/or HTTP requests) to separate processes?
(nginx? a little Go executable? something C++ Boost.Asio-based? something else?)

If it's not possible in-process, then how does one know when the older server is no longer in use, so it can be stopped gracefully? Asio servers typically run forever, i.e. never run out of work, even when they temporarily have nothing to do, by design.

The goal is to do this on-premises (i.e. not in the cloud), on a single machine (no containers), and cross-platform (Windows and Linux).

Thanks, --DD

_______________________________________________
Boost-users mailing list
[hidden email]
https://lists.boost.org/mailman/listinfo.cgi/boost-users

Re: Continuous Deployment of Boost.Beast-based server

Boost - Users mailing list
The easiest way is to simply replace the old process with the new,
improved version after stopping the listener for the old process.

The io_context is kept alive by handlers registered. Initially, this is
probably only the acceptor listening for incoming connections. Then, as
connections are established, handlers are registered for those too.
When you want to replace the process, you cancel the acceptor, so only
the existing connections keep the io_context alive.

At this point, the listener is closed and the port is free to use by
the new process. When all connections in the old process terminate, the
process then stops (because no more handlers are registered).

There is a small timeframe during which requests might fail, because
the old process stops listening and the new process is not yet
accepting connections. Depending on your platform, there are ways
around this (e.g. SO_REUSEPORT on linux).

On Tue, 2020-04-21 at 13:59 +0200, Dominique Devienne via Boost-users
wrote:

> Imagine I have a WebSocket server, implemented using Boost.Asio and Boost.Beast.
>
> It's deployed and serving clients in production or test environments, but I fixed a bug, checked in a fix, passed CI, and would like to redeploy, but w/o downtime for existing clients. New clients should use the same address, but be served by the newly deployed fixed server. While ongoing WebSocket connections keep on using the old server.
>
> Is that possible? In-process? Or one must use some kind of "front-end server" redirecting new traffic (i.e. new WebSocket sessions and/or HTTP requests) to separate processes? (nginx? little GoLang exe? In C++ Boost.Asio based? else?)
>
> If not possible in-process, then how to know when older server is no longer in use, to gracefully stop it? Asio servers typically run forever, i.e. never run out of work, even when they temporarily have nothing to do, by design.
>
> The goal is to do this on "premise" (i.e. not in the cloud), on a single machine (no containers), and cross-platform (Windows and Linux).
>
> Thanks, --DD



Re: Continuous Deployment of Boost.Beast-based server

Boost - Users mailing list
On 2020-04-21 5:59 a.m., Dominique Devienne via Boost-users wrote:

> It's deployed and serving clients in production or test environments,
> but I fixed a bug, checked in a fix, passed CI, and would like to
> redeploy, but w/o downtime for existing clients.  New clients should
> use the same address, but be served by the newly deployed fixed
> server.  While on-ongoing WebSocket connections keep on using the old
> server.
>
> If not possible in-process, then how to know when older server is no
> longer in use, to gracefully stop it?  Asio servers typically run
> forever, i.e. never run out of work, even when they temporarily have
> nothing to do, by design.
Load balancers are typically involved in this process.
https://www.haproxy.org/, ebpf/xdp, LVS,
http://blog.raymond.burkholder.net/index.php?/archives/632-Load-Balancing-With-DNS,-BGP-and-LVS.html 

>
> The goal is to do this on "premise" (i.e. not in the cloud), on a
> single machine (no containers), and cross-platform (Windows and Linux).
The basic premise is that there is some sort of proxy or service which tests for 'aliveness' and forwards requests to the appropriate service. Typically it is designed to 'drain' traffic from a service to be stopped and forward new sessions to an alternate service; when no further traffic is flowing to the old service, it can be stopped, updated, and restarted, and traffic can then be re-balanced.

And then of course, the question is how do you balance the balancer? Usually that is some sort of routing protocol.
>
> Thanks, --DD
>


Re: Continuous Deployment of Boost.Beast-based server

Boost - Users mailing list
On Tue, Apr 21, 2020 at 4:58 PM Raymond Burkholder via Boost-users <[hidden email]> wrote:

> > If not possible in-process, then how to know when older server is no longer in use, to gracefully stop it? Asio servers typically run forever, i.e. never run out of work, even when they temporarily have nothing to do, by design.
>
> Load balancers are typically involved in this process.
> https://www.haproxy.org/, ebpf/xdp, LVS,
> http://blog.raymond.burkholder.net/index.php?/archives/632-Load-Balancing-With-DNS,-BGP-and-LVS.html

I see. I found this article on how to configure HAProxy for WebSockets.

If I understand correctly, that means I must start the servers (old and new) on different ports, have HAProxy listen on the "main public port", and manually update HAProxy's config when redeploying, to start directing traffic to the new one?

My use case is simpler than load balancing; I was hoping for something simpler than HAProxy, NGinx, Traefik, etc., which are full-blown solutions for all sorts of networking tasks.
 
> > The goal is to do this on "premise" (i.e. not in the cloud), on a single machine (no containers), and cross-platform (Windows and Linux).
>
> The basic premise is that there is some sort of proxy or service which tests for 'aliveness' and forwards requests to the appropriate service. Typically it is designed to 'drain' traffic from a service to be stopped and forward new sessions to an alternate service; when no further traffic is flowing to the old service, it can be stopped, updated, and restarted, and traffic can then be re-balanced.

I didn't equate routing traffic from one server to another with load balancing, but I guess it makes sense. My server is already multi-user and multi-threaded, and not expected to have traffic that justifies a load balancer. Other people in the company are going crazy with Kubernetes and Docker, but I'm trying to keep things simple and make a good server fast and robust enough to avoid all that complexity.

Except that "hot reload", as they say in the Java world, does complicate things... --DD


Re: Continuous Deployment of Boost.Beast-based server

Boost - Users mailing list
On Tue, Apr 21, 2020 at 2:40 PM Martijn Otto via Boost-users <[hidden email]> wrote:

> The easiest way is to simply replace the old process with the new, improved version after stopping the listener for the old process.
>
> The io_context is kept alive by handlers registered. Initially, this is probably only the acceptor listening for incoming connections. Then, as connections are established, handlers are registered for those too. When you want to replace the process, you cancel the acceptor, so only the existing connections keep the io_context alive.
>
> At this point, the listener is closed and the port is free to use by the new process. When all connections in the old process terminate, the process then stops (because no more handlers are registered).

That's interesting, and simple. I guess I could extend one of my HTTP routes, or my existing WebSocket protocol handling, to have the new server communicate with the old one, notifying it to drop the listener and stop listening on the main port, so that the io_context naturally runs out of work and the process terminates. With possibly a grace period for existing clients to disconnect normally, before forceful termination.
 
> There is a small timeframe during which requests might fail, because the old process stops listening and the new process is not yet accepting connections. Depending on your platform, there are ways around this (e.g. SO_REUSEPORT on linux).

You mean because the main server port would not be released right away when the older server stops the io_context's TCP acceptor? Preventing the new server from listening on that port right away? --DD


Re: Continuous Deployment of Boost.Beast-based server

Boost - Users mailing list
On Tue, 2020-04-21 at 18:59 +0200, Dominique Devienne via Boost-users
wrote:

> On Tue, Apr 21, 2020 at 2:40 PM Martijn Otto via Boost-users <[hidden email]> wrote:
>
> > The easiest way is to simply replace the old process with the new, improved version after stopping the listener for the old process.
> >
> > The io_context is kept alive by handlers registered. Initially, this is probably only the acceptor listening for incoming connections. Then, as connections are established, handlers are registered for those too. When you want to replace the process, you cancel the acceptor, so only the existing connections keep the io_context alive.
> >
> > At this point, the listener is closed and the port is free to use by the new process. When all connections in the old process terminate, the process then stops (because no more handlers are registered).
>
> That's interesting, and simple. I guess I could extend one of my HTTP routes, or my existing WebSocket protocol handling, to have the new server communicate with the old one, notifying it to drop the listener and stop listening on the main port, so that the io_context naturally runs out of work and the process terminates. With possibly a grace period for existing clients to disconnect normally, before forceful termination.
>
> > There is a small timeframe during which requests might fail, because the old process stops listening and the new process is not yet accepting connections. Depending on your platform, there are ways around this (e.g. SO_REUSEPORT on linux).
>
> You mean because the main server port would not be released right away when the older server stops the io_context's TCP acceptor? Preventing the new server from listening on that port right away? --DD

It's inherently not synchronized. You have to wait for the old process
to release the port, at which point incoming requests aren't being
handled. Then you have to set up your new acceptor.

As said, there are platform-specific ways around this. I already
mentioned SO_REUSEPORT; UNIX also allows sharing file descriptors
over Unix domain sockets. Windows most likely has similar options.




Re: Continuous Deployment of Boost.Beast-based server

Boost - Users mailing list
On 2020-04-21 10:52 a.m., Dominique Devienne via Boost-users wrote:

> I didn't equate routing traffic from 1 server to another as Load Balancing,
> but I guess it makes sense. My server is already multi-user and multi-threaded,
> and not expected to have traffic that justifies a Load Balancer. Other people in
> the company are going crazy with Kubernetes and Docker, but I'm trying to keep things
> simple and make a good server fast and robust enough to avoid all that complexity.

Well, your premise seems to be that you want to upgrade a service. From a programming perspective, you have a few choices:

a) as someone else has mentioned, make use of the SO_REUSEPORT socket option: write your own little packet forwarder, encapsulate your code in a reloadable module of some sort, and when you want to upgrade, load the module and migrate the socket, taking care of whatever state management is necessary for migrating traffic.

b) use an external service (call it a load balancer or whatever); yes, it needs to 'know' the old and new service. Send a signal that new traffic goes to the new instance. It should be smart enough to forward packets of old sessions to the old instance. When no sessions need the old instance, the old instance can be removed. This can be scaled as necessary. The lowest-cost version could be done with iptables/nftables: existing connections remain in place until torn down.
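Option (b)'s least-cost variant might look like this with nftables (a hedged sketch; the port numbers are made up, and conntrack behavior should be verified on your kernel). Connections established before the rule keep their existing conntrack/DNAT mapping, while new connections match the updated rule:

```shell
# Public port 8080; the new instance listens on 8081 (both ports made up).
nft add table ip nat
nft 'add chain ip nat prerouting { type nat hook prerouting priority dstnat; }'

# New connections to :8080 get redirected to the new instance on :8081;
# already-established flows keep going to the old instance via conntrack,
# so the old process can drain its sessions and then exit.
nft add rule ip nat prerouting tcp dport 8080 redirect to :8081
```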

