asio: cancelling a named pipe client

I have a client which connects to a named pipe as follows:


CreateFile(pipeName.c_str(),GENERIC_READ | GENERIC_WRITE, 0, nullptr, OPEN_EXISTING, FILE_FLAG_OVERLAPPED, nullptr);


The result of this call is assigned to a stream_handle. I want the io_service to shut down in an orderly way, with run() returning because it has run out of work. To achieve this, I post a lambda that effectively calls cancel() on the handle (hidden inside Kill()):


    _io.post([this]() {
        for (auto& w : _workers) w->Kill();
    });
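
For context, the handle is wrapped along these lines (a sketch; _io and the variable names are assumptions, only CreateFile and stream_handle come from the description above):

HANDLE h = ::CreateFile(pipeName.c_str(), GENERIC_READ | GENERIC_WRITE, 0, nullptr,
                        OPEN_EXISTING, FILE_FLAG_OVERLAPPED, nullptr);
// The stream_handle takes ownership of the overlapped handle; cancel()/close() act on it.
asio::windows::stream_handle pipe(_io, h);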


However, cancel() has no effect: the callback for async_read continues to be invoked with an error code of zero and data read from the pipe. For reference, this is the read call with its handler:


    template<typename T>
    void read(T& ioh, const asio::mutable_buffer& buf)
    {
        ioh.async_read_some(asio::buffer(buf), [this, self = shared_from_this()](const boost::system::error_code& ec, size_t sz) {
            if (ec == boost::system::errc::operation_canceled)
                return;
            if (ec)
                QUINE_THROW(Error(self->_ownersId, ec, "async_read_some"));
            self->ReadCompleted(sz);
        });
    }


ReadCompleted() processes the received data and loops by calling read() again.


If I call close() instead, the callback gets an error code and everything works out correctly, *except* that I get an [invalid handle] error code that gets logged as an error (though it isn't one).


Am I correct in assuming that cancel() is an apparent noop in this case because of the race-condition where an I/O request completes successfully before cancel is invoked?


If so, can you suggest a more elegant way (i.e., a way that doesn't induce a hard error) of exiting a loop as described here? Setting a member variable instead of calling cancel?


Given the existence of the race-condition, what are use-cases for cancel? How to use it correctly, if at all possible?


-- Stian



Re: asio: cancelling a named pipe client

On 27/01/2018 06:50, Stian Zeljko Vrba wrote:
> Am I correct in assuming that cancel() is an apparent noop in this case
> because of the race-condition where an I/O request completes
> successfully before cancel is invoked?

Most likely yes.  It just internally calls through to the OS API, which
will have nothing to do if there isn't an outstanding OS request at that
exact moment.

ASIO can't internally consider this a permanent failure because there
may be cases where you wanted to cancel a single operation and then
start a new one that you expect to continue normally.

> If so, can you suggest a more elegant way  (i.e., a way that doesn't
> induce a hard error) of exiting a loop as described here? Setting a
> member variable instead of calling cancel?

Probably the best thing to do is to do both, in this order:

   1. Set a member variable that tells your completion handler code to
not start a new operation.
   2. Call cancel() to abort any pending operation.

This covers both cases; if you miss the pending operation then the
member will tell your completion handler to not start a new one and just
return, and if you don't then the cancellation will generate an
operation_aborted which you can either silently ignore and return
immediately or fall through to the code that checks the member.

There's still a race between when you check the member and when the
operation actually starts -- but that's why you need to post your
cancellation request to the same io_service (and use explicit strands if
you have more than one worker thread).

Omitting the cancel isn't recommended as this would prolong shutdown in
the case that the remote end isn't constantly transmitting.
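
A minimal sketch of that two-step pattern (single worker thread assumed; Kill(), _stopping and _handle are hypothetical names):

void Worker::Kill()
{
    _stopping = true;     // 1. tell completion handlers not to start new operations
    _handle.cancel();     // 2. abort any operation still pending in the OS
}

// In the read completion handler:
//     if (ec == boost::asio::error::operation_aborted || _stopping)
//         return;        // shutting down: don't start another read
//     ... process the data, then start the next read ...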

Re: asio: cancelling a named pipe client

Ok, thanks for the suggestion.


As a side-note, cancellation/shutdown seems to be the least thought-through feature in ASIO..


- it's an all or nothing thing, i.e., it can't be used to cancel individual I/O requests

- it doesn't work on completed requests


.. EVEN THOUGH the io service has a list of outstanding requests (waiting for completion) and pending (completed) handlers.


I could also just call stop() on the io_service, but when it's started again, all the "old" handlers will be called as well. The only complete solution is probably stopping and deleting the io_service, and recreating it for the next "go".


-- Stian


Re: asio: cancelling a named pipe client

Hi Stian,

Some thoughts from an ASIO veteran and fan:

> - it's an all or nothing thing, i.e., it can't be used to cancel individual I/O requests


It is not valid to have more than one outstanding async read on an asio io object at a time*. cancel() will cancel the current async operation that is in progress on that object if there is one.

You have to remember that notifications come through the io_service and therefore “happen” for the client later than they actually “happened” in reality. If you want to correlate every completion handler invocation with every read call, then you might want to consider assigning an “invocation id” to each read operation and passing that to the closure (handler).

* clarification: deadline_timers may have more than one outstanding wait, and an io object may have an outstanding read and write at the same time.
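
A rough sketch of the invocation-id idea (the counter and member names are hypothetical):

void Worker::StartRead()
{
    const std::uint64_t id = ++_nextReadId;   // hypothetical per-operation counter
    _handle.async_read_some(boost::asio::buffer(_buf),
        [this, id, self = shared_from_this()](const boost::system::error_code& ec, std::size_t sz) {
            // 'id' tells the handler exactly which StartRead() call it belongs to.
            OnReadCompleted(id, ec, sz);
        });
}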

> it doesn't work on completed requests

Because the handler has already been passed to the io_service for invocation. From the socket’s point of view, you’ve been notified. Sending a cancel before the execution of the handler can only meaningfully result in a NOP because it’s a crossing case. Think of cancel() as meaning, “please cancel the last request if it’s not already completed."
 
> EVEN THOUGH the io service has a list of outstanding requests (waiting for completion) and pending (completed) handlers

Anything posted to the io_service will happen. It’s a done deal. The io_service is (amongst other things) a multi-producer, multi-consumer queue with some clever thread marshalling. This is important. Handlers often hold lifetime-extending shared-pointers to the source of their invocations. The handler’s invocation is where the resource can be optionally released.

> I could also just call stop() on the io_service

This would indicate a design error. Think of the io_service as “the global main loop” of your program. When writing a Windows or OSX program, no-one “stops” the Windows message loop. Messages have been posted. They must be dealt with. This is the nature of the reactor-pattern world.

R


Re: asio: cancelling a named pipe client

Hi, thanks for your thoughts.


> It is not valid to have more than one outstanding async read on an asio io object at a time


Although I'm not doing this, this restriction is mentioned in the documentation only for composite operations (free-standing async_read) but not for members on objects (async_read_some()).

> Because the handler has already been passed to the io_service for invocation.... Think of cancel() as meaning, “please cancel the last request if it’s not already completed." ... Anything posted to the io_service will happen.

So asio leaves handling of difficult (edge-)cases to all users instead of offering a user-friendly opt-in solution, such as: each i/o object tracks posted, but not-yet-executed handlers. When cancel on the object is called, it would traverse the list and update error codes. (Meaningful only if the operation completed successfully.) Handlers don't need to be immutable, and given that handlers take error_code by const reference (implying that error_code must already be stored somewhere deep in asio), I suspect they aren't. 

(Unrelated: individual operations cannot be canceled (e.g., read, but not write); this is a glaring design omission from my POV. I needed that in another project.)

> Handlers often hold lifetime-extending shared-pointers to the source of their invocations.

Yes, that's another gotcha when you have both reads and writes outstanding. It's especially tricky to discover and fix when only, say, the read fails due to a broken pipe, but there's no data to send, so write() won't also fail in the foreseeable future. Then the io_service just sits and hangs there waiting for the write handler to return...

> This would indicate a design error. Think of the io_service as “the global main loop” of your program.

I have a program where configuration cannot be changed dynamically. I have to stop the components and recreate them with the new configuration object. The amount of time I've spent figuring out how to get clean shutdown/cancellation working (and I'm probably still not there yet!) leads me to accept "design error" as a valid solution to the problem: stop(), delete the io_service, and recreate it when needed. It's an ugly and inelegant solution that fixes absolutely all problems (hey, that's what engineering is all about!), including "stale" handlers being invoked upon restart.

.. I guess this semi-rant can be summarized as: asio needs documentation on best practices/patterns/guidelines for life-time management of handlers.

(Right now, in my design, the "controlling" object has weak_ptrs to "worker object", while workers keep themselves alive through a shared_ptr. Each worker also has a unique ID because different instances may be recreated at the same addresses... which leads me to the following question.)

Does weak_ptr protect against an analogue of the "ABA" problem: Say I have a permanent weak_ptr to an alive shared_ptr. Then the shared_ptr gets destroyed. Then another shared_ptr of the same type gets created, but both the object and the control block get the same addresses as the previous instances (not unthinkable with caching allocators). How will lock() on the existing weak_ptr behave? Intuitively, it should return null, but will it? Does the standard say anything about this?

-- Stian


Re: asio: cancelling a named pipe client

On 31/01/2018 03:03, Stian Zeljko Vrba wrote:
>  > It is not valid to have more than one outstanding async read on an
> asio io object at a time
>
> Although I'm not doing this, this restriction is mentioned in the
> documentation only for composite operations (free-standing async_read)
> but not for members on objects (async_read_some()).

There is actually no technical restriction from having multiple pending
reads -- even for standalone async_read.  You can do it, and it will
behave "correctly".

The trouble is that this correct behaviour is not *useful* behaviour.
If you have multiple pending reads on the same object then it means the
OS is free to scatter the data some here, some there, and it becomes
impossible to make sense of the data arrival order, which renders stream
sockets fairly useless.  (It *is* something you can sensibly do with
message-based sockets, though -- but it's still unusual because
application-layer protocols usually aren't written to react well to
things being processed out of expected order.)

Multiple writes are the same -- there's no technical reason why you
can't, but usually it's nonsensical to actually do it since the data can
end up interleaved in strange and unexpected ways at the other end.

So the limit of one outstanding read and one outstanding write at a time
is a practical one, not a technical one.

> So asio leaves handling of difficult (edge-)cases to all users instead
> of offering a user-friendly opt-in solution, such as: each i/o object
> tracks posted, but not-yet-executed handlers. When cancel on the object
> is called, it would traverse the list and update error codes.

There is no list to traverse.  There can't be, due to the nature of MPMC
queues.

Besides, if an operation did actually execute correctly, it's usually
more useful to report that success even if a cancel occurred later --
after all, the bytes were actually read or transmitted, and it may be
important to know that so that you know what you need to send next.

> (Unrelated: individual operations cannot be canceled (e.g., read, but
> not write); this is a glaring design omission from my POV. I needed that
> in another project.)

This is generally an OS limitation.

It's also very standard in concurrent programming that requests to
cancel are just that: requests.  The request is free to be ignored if
the task has already completed, even if the callback hasn't been invoked
yet, and especially if the callback might already be executing.

It's simply not possible to do it any other way.

> Yes, that's another gotcha when you have outstanding both reads and
> writes. Esp. tricky to discover and fix when only, say, read, fails due
> to broken pipe, but there's no data to send so that also write() fails
> in the forseeable future. Then io_service just sits and hangs there
> waiting for the write handler to return...

If there's no data to write then you don't have a pending write to begin
with.  Write operations are only started when you actually have data to
send, and typically complete very quickly (with the exception of pipes
that are full) -- typically only read (and listen) operations are left
pending for long periods while waiting for incoming data.

> .. I guess this semi-rant can be summarized as: asio needs documentation
> on best practices/patterns/guidelines for life-time management of handlers.

It has examples.

> Does weak_ptr protect against an analogue of the "ABA" problem: Say I
> have a permanent weak_ptr to an alive shared_ptr. Then the shared_ptr
> gets destroyed. Then another shared_ptr of the same type gets created,
> but both the object and the control block get the same addresses as the
> previous instances (not unthinkable with caching allocators). How will
> lock() on the existing weak_ptr behave? Intuitively, it should return
> null, but will it? Does the standard say anything about this?

Yes, it will reliably return nullptr.  It is not possible for a new
object to have the same control block address as some prior object as
long as any weak_ptrs to the original object exist.

Essentially both the control block and the object are refcounted; a
shared_ptr holds a count of both the object and the control block, while
a weak_ptr holds a count of the control block alone.
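
A small self-contained illustration of that guarantee:

#include <cassert>
#include <memory>

int main()
{
    auto sp = std::make_shared<int>(1);
    std::weak_ptr<int> wp = sp;
    sp.reset();                           // object destroyed; control block kept alive by wp
    auto sp2 = std::make_shared<int>(2);  // even if this reuses the old addresses...
    assert(wp.lock() == nullptr);         // ...the stale weak_ptr still reliably yields nullptr
}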

Re: asio: cancelling a named pipe client

> I guess this semi-rant can be summarized as: asio needs documentation on best practices/patterns/guidelines for life-time management of handlers.

With this I agree. It has taken me quite some time to become familiar enough with asio to be able to write correct services for it. I had to read the (sparse) documentation a number of times, and made numerous mistakes.



Re: asio: cancelling a named pipe client

> It has examples.


And this is what the C++11 HTTP server example does to abort a read/write loop for the client:


void connection::stop()
{
  socket_.close();
}


Exactly what I wanted to avoid. Why?

The mundane reason is that I want to have clean event logs. This is how the example(s) you're referring to "handle" errors (an excerpt):

...
else if (ec != boost::asio::error::operation_aborted)
        {
          connection_manager_.stop(shared_from_this());
        }
...

The principled reason is that I'd like an error code != 0 to really mean that a hard error happened, instead of "maybe error, maybe my program wants to abort the handler loop". With the example "solution" I'd be de-facto repurposing errc::invalid_handle (or whatever it's called) to mean the same as errc::operation_aborted, which I don't like at all. If I have to explain why: because I want all the help I can get from the OS to diagnose my own mess-ups.

Yes, I can use additional state in addition to the error code, but... then I end up with *two* things to check in each handler. (Is it an error? Did it happen because I made it happen?)

Re: asio: cancelling a named pipe client

On 31/01/2018 20:00, Stian Zeljko Vrba wrote:

> And this what the C++11 HTTP server example does to abort a read/write
> loop for the client:
>
> void connection::stop()
> {
>    socket_.close();
> }
>
>
> Exactly what I wanted to avoid. Why?
>
> The mundane reason is that I want to have clean event logs. This is how
> the example(s) you're referring to "handle" errors (an excerpt):
>
> ...
> else if (ec != boost::asio::error::operation_aborted)
>          {
>            connection_manager_.stop(shared_from_this());
>          }
> ...
>
> The principled reason ist that I'd like that error code != 0 really
> means that a hard error happened, instead of "maybe error, maybe my
> program wants to abort the handler loop". With the example "solution"
> I'd be de-facto repurposing errc::invalid_handle (or whatever it's
> called) to mean the same as errc::operation_aborted, which  I don't like
> at all. If I have to explain why: because I want all the help I can get
> from the  OS to diagnose my own mess-ups.

If you close the socket while an operation is pending, it will trigger
operation_aborted, which should not be logged (as this is an "expected"
error).  It will also *not start another operation* (for any error).

So you will never see an invalid_handle error as a result of that as
long as some operation is pending at all times.  Thus invalid_handle is
always an actual error.


What about the case when a read has completed?  This is not a problem as
long as you make sure that you post the (method that calls) close() to
the io_service, ensuring that it is on the same strand as the read (or
any other operations) -- which is automatic if you only have one worker
thread.

Why?  Because the completion handler of the read operation will always
start another read operation if it succeeds, and never do so if it
fails.  And posted tasks are always executed in the order posted (when
there is one worker thread, or they are posted through the same strand).

So either the read is pending, and the close/cancel will abort it and
give you an operation_aborted; or the read is completed, and the read's
completion handler will execute and start a new read before the
close/cancel can execute.


What you *don't* want to do is to execute the close/cancel in some other
context, as then you'll get weirder behaviour without some other
mechanism (such as an extra flag) to track that you're trying to shut down.
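
As an illustration, posting the close might look like this (a sketch; one worker thread assumed, Worker/_io/_handle are hypothetical names):

void Worker::RequestStop()
{
    _io.post([self = shared_from_this()]() {
        boost::system::error_code ignored;
        self->_handle.close(ignored);   // a pending read, if any, completes with operation_aborted
    });
}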

Re: asio: cancelling a named pipe client

> Why?  Because the completion handler of the read operation will always start another read operation if it succeeds, and never do so if it fails.  And posted tasks are always executed in the order posted (when there is one worker thread, or they are posted through the same strand).

The documentation for async_read_some contains the following ambiguous remark about immediate completion: "Regardless of whether the asynchronous operation completes immediately or not, the handler will not be invoked from within this function. Invocation of the handler will be performed in a manner equivalent to using boost::asio::io_context::post(). "

So is the following possible:

  1. Another read operation is started, but that one completes immediately because data is ready on the pipe. I.e., async_read_some immediately posts the completion handler.
  2. close() has nothing to cancel, but closes the socket.
  3. The handler posted in 1 is executed, it goes to initiate another read, and that read finds the pipe closed, returning errc::invalid_handle.

> So either the read is pending, and the close/cancel will abort it and give you an operation_aborted; or the read is completed, and the read's completion handler will execute and start a new read before the close/cancel can execute.

Even if async_read_some does *not* post the handler immediately but just initiates the I/O, this I/O can complete immediately and cannot be cancelled at the OS level. Thus the attempted cancellation by close() will be a no-op, and the read handler will get an OK status and proceed to use the closed handle...

According to the comment in the code in the link below, this is a real possibility: https://msdn.microsoft.com/en-us/library/windows/desktop/aa363789(v=vs.85).aspx



Re: asio: cancelling a named pipe client

On 1/02/2018 05:39, Stian Zeljko Vrba wrote:
> So is the following possible:
>
>  1. Another read operation is started, but that one completes
>     immediately because data is ready on the pipe. I.e., async_read_some
>     immediately posts the completion handler.
>  2. close() has nothing to cancel, but closes the socket.
>  3. The handler posted in 1 is executed, it goes to initiate another
>     read, and that read finds the pipe closed, returning
>     errc::invalid_handle.

No, because you don't call close() directly, you post() a method that
calls close(), which cannot execute before #3 unless either the
operation is still pending or you have already started a new operation
that is now pending.

> Even if async_read_some does *not* post the handler immediately but just
> initiates the I/O, this I/O can complete immediately and cannot be
> canceled on the OS-level. Thus attempted cancellation by close() will be
> a noop, the read handler will get an OK status and proceed to use the
> closed handle...

No, because the close hasn't actually happened yet, it's still sitting
in the queue.

The queue itself is OS-managed, so the moment the operation completes it
will push the completion handler to the queue.  It doesn't require
intervention from ASIO code.

Re: asio: cancelling a named pipe client

Hi,


thanks for your patience. This is still unclear.


>  post() a method that calls close()


That's what I meant. I'll try to be more precise. QC below stands for the io_service's queue contents.

  1. io_service dequeues and executes a completed read handler. This handler starts a new async read operation... QC: [] (empty). Pending: async_read.
  2. The program enqueues to io_service a lambda that calls close. QC: [ {close} ] Pending: async_read.
  3. In the mean-time (say, during the enqueue), the started async_read operation completed successfully in parallel because it didn't block at all. QC: [ {close} {ReadHandler} ] Pending: none.
  4. io_service dequeues {close}, nothing to cancel, the handle is closed. QC: [ {ReadHandler} ]
  5. io_service dequeues {ReadHandler}, which initiates a new read from the closed handle. QC: []

This assumes that asynchronous operations and notifications are, well, asynchronous. IOW, there is a time window (en-/dequeueing and execution of {close}) during which async_read can complete and end up non-cancellable, so {ReadHandler} is enqueued with success status.

What part of the puzzle am I missing?


Re: asio: cancelling a named pipe client

On 1/02/2018 19:36, Stian Zeljko Vrba wrote:

> That's what I meant. I'll try to be more precise. QC below stands for
> the io_service's queue contents.
>
>  1.
>     io_service dequeues  and executes a completed read handler. This
>     handler starts a new async read operation... QC: [] (empty).
>     Pending: async_read.
>  2.
>     The program enqueues to io_service a lambda that calls close. QC: [
>     {close} ] Pending: async_read.
>  3.
>     In the mean-time (say, during the enqueue), the started async_read
>     operation completed successfully in parallel because it didn't block
>     at all. QC: [ {close} {ReadHandler} ] Pending: none.
>  4.
>     io_service dequeues {close}, nothing to cancel, the handle is
>     closed. QC: [ {ReadHandler} ]
>  5.
>     io_service dequeues {ReadHandler}, which initiates a new read from
>     the closed handle. QC: []
>
>
> This assumes that asynchronous operations and notifications are, well,
> asynchronous. IOW, there is a time window (en-/dequeueing and execution
> of {close}) during which async_read can complete and end up
> non-cancellable, so {ReadHandler} is enqueued with success status.
>
> What part of the puzzle am I missing?

In #5, before ReadHandler is executed the socket is already closed.
Thus after processing the successfully read data (or discarding it, if
you prefer), you can check socket.is_open() before starting a fresh read.

This will of course be false in the sequence above, at which point you
just return instead of starting the read, and then once the handler
exits the objects will fall out of existence (if you're using the
shared_ptr lifetime pattern).  If there's no other work to do at that
point then the io_service will also exit naturally.

If the close ended up queued after ReadHandler, then is_open() will
still be true, you will start a new read operation, and then either the
above occurs (if the read actually completes before it starts executing
the close), or the close does find something to abort and this will
enqueue ReadHandler with operation_aborted.

There is no race between checking is_open() and starting the next read
because both the ReadHandler and the close are occurring in the same
strand, so can't happen concurrently.
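
A sketch of the resulting read loop, reusing names from the earlier snippets (ReadCompleted(), _handle and Process() are assumptions):

void Worker::ReadCompleted(std::size_t sz)
{
    Process(_buf, sz);                         // hypothetical handling of the received data
    if (!_handle.is_open())                    // the posted close() already ran: end the loop quietly
        return;
    read(_handle, boost::asio::buffer(_buf));  // otherwise start the next async_read_some
}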

Re: asio: cancelling a named pipe client

> Thus after processing the successfully read data (or discarding it, if you prefer), you can check socket.is_open() before starting a fresh read.


OK, everything's sorted out now. I like the is_open() suggestion since I don't have to introduce another state variable to stop the loop. 


Thanks again!


-- Stian


Re: asio: cancelling a named pipe client

Hi,

I have followed this thread for the past few days and I learned a lot about using boost::asio. Thanks a lot.

The main points that I learned are:

- Don't call close() directly, but post() it.
- Check is_open() before starting new async operations.

Btw, would it be ok to call close() in an event handler?

I have been using boost::asio for about a year in several projects, and in each of them I had different bugs in my code, some of which were really hard to debug. There are so many things to keep in mind.

I tried to use boost::coroutine once, but I had so much trouble that for the next project I went back to using regular handlers. At first I used async_read_some(); later I discovered async_read() and async_read_until(). I have the feeling of approaching the 'optimal' solution step by step, but I also think there is still some road ahead.

Is there somewhere on the internet, or in the Boost documentation, a kind of best-practices guide for using boost::asio?

73, Mario

Von: Boost-users [mailto:[hidden email]] Im Auftrag von Stian Zeljko Vrba via Boost-users
Gesendet: D
onnerstag, 1. Februar 2018 08:51
An: [hidden email]
Cc: Stian Zeljko Vrba; Gavin Lambert
Betreff: Re: [Boost-users] asio: cancelling a named pipe client

 

> Thus after processing the successfully read data (or discarding it, if you prefer), you can check socket.is_open() before starting a fresh read.

 

OK, everything's sorted out now. I like the is_open() suggestion since I don't have to introduce another state variable to stop the loop. 

 

Thanks again!

 

-- Stian


_______________________________________________
Boost-users mailing list
[hidden email]
https://lists.boost.org/mailman/listinfo.cgi/boost-users



Re: asio: cancelling a named pipe client

Boost - Users mailing list

Hi,


> The main points that I learned are:
> ·  Don’t call close(), but post() it.
> ·  Check is_open() before starting new async operations.
>
> Btw, would it be ok to call close() in an event handler?




_______________________________________________
Boost-users mailing list
[hidden email]
https://lists.boost.org/mailman/listinfo.cgi/boost-users

Re: asio: cancelling a named pipe client

Boost - Users mailing list

Argh, I apologize for the previous reply; I pressed a wrong key combo.



> The main points that I learned are:
> ·  Don’t call close(), but post() it.
> ·  Check is_open() before starting new async operations.

Those *seem* to be the rules for arranging a clean shutdown...

> Btw, would it be ok to call close() in an event handler?

Yes, event handlers are called "as if" they were posted to the main loop.
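
A minimal sketch of what that permits (illustrative names, not from the original code): inside a completion handler you can close the handle directly, without posting:

    // Sketch only: handlers already run on the io_service, "as if" posted,
    // so closing here cannot race with other handlers on the same strand.
    void Worker::OnRead(const boost::system::error_code& ec, std::size_t sz)
    {
        if (ec)
        {
            boost::system::error_code ignored;
            _pipe.close(ignored);    // safe: we are inside a handler, not on a foreign thread
            return;
        }
        ReadCompleted(sz);
        if (_pipe.is_open())
            StartRead();
    }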


-- Stian



_______________________________________________
Boost-users mailing list
[hidden email]
https://lists.boost.org/mailman/listinfo.cgi/boost-users