Understanding fibers


Boost - Users mailing list
Hello @ll,

I am in the process of transforming a sync server application to an async approach. The framework in use (not asio) instantiates and calls an object representing the call when a request comes in, lets me do things and fill a response object, and then expects me to call a finish() method, which sends the response and starts the next request. This is not what the question is about, just my circumstances. All code below is pseudocode, please...

void rpc_work() {
   const value = do_stuff(request);
   response.fill_value(value);
   parent->finish();
}

Now some of the calls will imply calling methods, which block on some operations, waiting for external resources to respond. Obviously in an async server with only one thread, this will block everything. So I'm looking for a clever way to structure this.

My approach so far (in the prototype) is to go boost::async() for those operations. Like this:

void rpc_work() {
   // prepare work
   boost::async(launch::any, [request, response, parent]() {
         const value = do_blocking_stuff(request);
         response.fill_value(value);
         parent->finish();
   });
}

The way I understand boost::async, or indeed std::async, is that normally it will just spawn a new thread for the operation. The best I can hope for is a thread pool, but even that is not guaranteed (on Linux x64, gcc 7.3).

So I started looking into fibers but I'm having a hard time understanding them, which is why I post here. From what I gather, what I have here is a prime use case as it matches many things the docs talk about. And yet they also suggest that the fibers are very intrusive and everything underneath do_blocking_stuff() would have to be aware of being in a fiber and yield() when appropriate. Is this correct? What would this mean?

Most if not all of the blocking operations use Boost.Thread futures for their blocking parts. They do things like...

value_t do_blocking_stuff(request) {
   boost::future<value_t> fv = retrieve_value_from_somewhere();
   fv.wait_for(timeout);  // note: the returned future_status is ignored here
   return fv.get();
}

Now, considering I would use fibers in the server, would I have to use 'fiber futures' here to make this work? To make the blocking functions automatically 'fiber aware' and yield() when they would block? This would imply replacing the thread::futures?
And if so, how will I know when the value is there and they are ready to continue? And how will I cause this continuation? Do I have to keep track of them and somehow poll() them in a loop or something?

I am aware of the broad nature of my question, but perhaps someone can chime in and give some hints that allow me to understand fibers better. They look like they would fit my use case perfectly, but I am a bit reluctant to drag the concept all throughout the code only to realize it won't work.

Perhaps someone can eli5 this to me ;-)

Cheers,

Stephan




_______________________________________________
Boost-users mailing list
[hidden email]
https://lists.boost.org/mailman/listinfo.cgi/boost-users

Re: Understanding fibers

On 14/12/2018 03:19, Stephan Menzel wrote:
> So I started looking into fibers but I'm having a hard time
> understanding them, which is why I post here. From what I gather, what I
> have here is a prime use case as it matches many things the docs talk
> about. And yet they also suggest that the fibers are very intrusive and
> everything underneath do_blocking_stuff() would have to be aware of
> being in a fiber and yield() when appropriate. Is this correct? What
> would this mean?

That's correct -- fibers are intrusive and if you have blocking code
that you do not control, then fibers are not an appropriate solution for
you.

Fibers are conceptually similar to coroutines -- the main difference is
that for coroutines you explicitly choose which one you want to yield
to, whereas with fibers you yield to a "fiber scheduler" (which is
essentially just another coroutine that manages a list of coroutines and
timers) and let it choose which one to schedule based on ready queues
etc -- similar to OS threads, but all cooperatively multitasked within a
single OS thread.

But this means that you can't do anything to block the OS thread,
otherwise you're still blocking everything from making progress.

> Now, considering I would use fibers in the server, would I have to use
> 'fiber futures' here to make this work? To make the blocking functions
> automatically 'fiber aware' and yield() when they would block? This
> would imply replacing the thread::futures?

Yes.  As long as you replace *all* thread blocking with fiber blocking,
then that can work.

> And if so, how will I know when the value is there and they are ready to
> continue? And how will I cause this continuation? Do I have to keep
> track of them and somehow poll() them in a loop or something?

No, the fiber scheduler does that.  As is normal for futures, when
whatever holds the promise publishes the result to the promise, the
future can "wake up" and return that value.

It won't happen immediately (because there's only one OS thread), but it
can happen after the task that posted to the promise itself finishes or
yields.

Re: Understanding fibers

On Thu, Dec 13, 2018 at 10:22 PM Gavin Lambert via Boost-users <[hidden email]> wrote:

That's correct -- fibers are intrusive and if you have blocking code
that you do not control, then fibers are not an appropriate solution for
you.

Fibers are conceptually similar to coroutines -- the main difference is
that for coroutines you explicitly choose which one you want to yield
to, whereas with fibers you yield to a "fiber scheduler" (which is
essentially just another coroutine that manages a list of coroutines and
timers) and let it choose which one to schedule based on ready queues
etc -- similar to OS threads, but all cooperatively multitasked within a
single OS thread.

But this means that you can't do anything to block the OS thread,
otherwise you're still blocking everything from making progress.


Thank you for your response. That cleared things up big time. I think I have a better understanding now. 

Luckily, I do have control over pretty much all the blocking code, as I can see now. It is business logic using an asio-based client library for redis I'm working on ( https://github.com/MrMoose/mredis ). This lib uses an io_service with one thread and an implicit strand to do all its work. As long as the client object exists, this thread runs. The interface basically puts promises into the connections that run in the io_service, which continually do the work and set values on the promises. Users issue commands, get a future in return and wait on it, but they can also hand in callbacks.
Transforming that into fibers won't be easy, as I am only starting with those, but I suppose it's absolutely possible. Just to clarify, if you allow one follow-up question:

If I simply add fiber futures to the interface, can I set the value on the fiber promise from that other thread running the io_service, leaving the basic architecture intact? This would allow me to still use the library with regular threading in non-fiber scenarios, which I would very much like to. Alternatively, I could just hand in a callback with a fiber promise, but this also means the callback is executed in the thread that runs the client object. Hence my question.

But I do realize that is a tricky question to ask without knowledge of the library. You already helped a lot.

Thank you very much!

Stephan


Re: Understanding fibers

Well, I should have just tried rather than asking questions. 

On Fri, Dec 14, 2018 at 8:33 AM Stephan Menzel <[hidden email]> wrote:

If I simply add fiber futures to the interface, can I set the value to the fiber promise in that other thread running the io_service, leaving the basic architecture? This would allow me to still use the library with regular threading and in non-fiber scenarios, which I would very much like to. Alternatively, I could just hand in a callback with a fiber promise but this also means that the callback is executed in the thread that runs the client object. Hence my question.

I have just tried it and wrote a quick and dirty test case. I have the results here, in case anyone is interested:


The code doesn't have any explicit yield()s, which was important to me, and yet it behaves as expected. My takeaways from this exercise are:
 
 * Yes, it is possible to set the value of a fiber promise from another thread. At least I didn't see problems.
 * When I use the callback mechanism, I can transform the code without having to change the underlying library.
 * Fibers are not as complicated as I thought they would be.

Once I realized the interface is very similar to threads, and everything that would block (such as waiting for a future to become ready) is pretty much equivalent to 'hand over to another fiber', it all fell into place. I do believe the examples in the docs could be a bit less intimidating though. Perhaps something like 'fibers for people already familiar with threads'... It turned out to be much less difficult than I thought.

Anyway, thanks again for your help. I am now equipped to use the fibers in my async server.
 
Cheers,
Stephan



Re: Understanding fibers

On 14/12/2018 20:33, Stephan Menzel wrote:

> Luckily I do have control over pretty much all the blocking code as I
> can see now. It is business logic using an asio based client library for
> redis I'm working on ( https://github.com/MrMoose/mredis ). This lib
> uses an io_service with one thread and implicit strand to do all it's
> work. As long as the client object exists, this thread runs. The
> interface to it basically puts promises into the connections that run in
> the io_service and continually do the work and set the values upon the
> promises. Users will issue commands, get a future in return and wait for
> it but they can also hand in callbacks.
> Not such an easy thing to transform that into fiber as I am only
> starting with those but I suppose absolutely possible. Just to clarify,
> if you allow that one followup question:
>
> If I simply add fiber futures to the interface, can I set the value to
> the fiber promise in that other thread running the io_service, leaving
> the basic architecture? This would allow me to still use the library
> with regular threading and in non-fiber scenarios, which I would very
> much like to. Alternatively, I could just hand in a callback with a
> fiber promise but this also means that the callback is executed in the
> thread that runs the client object. Hence my question.

If you're already using Boost.Asio, then you can just use that, without
mixing in Boost.Fiber.

Asio already supports coroutines and a std::future interface -- although
note that these are thread-blocking futures and are intended only for
use for callers *outside* the main I/O thread(s).

Inherently though an Asio io_context running on a single thread *is* a
kind of fiber scheduler for operations posted through that context,
including both actual async_* operations and arbitrary posted and
spawned work.

See:
  * https://www.boost.org/doc/libs/1_69_0/doc/html/boost_asio/overview/core/spawn.html
  * https://www.boost.org/doc/libs/1_69_0/doc/html/boost_asio/overview/cpp2011/futures.html

Re: Understanding fibers

Hi Gavin,

On Sun, Dec 16, 2018 at 11:42 PM Gavin Lambert via Boost-users <[hidden email]> wrote:

If you're already using Boost.Asio, then you can just use that, without
mixing in Boost.Fiber.

Asio already supports coroutines and a std::future interface -- although
note that these are thread-blocking futures and are intended only for
use for callers *outside* the main I/O thread(s).


Yes, I have been using asio all over the place for many years, but I have never used the coroutine interface. I only recently discovered it and plan to use it. I don't, however, see how I can integrate it into my plans here.
First, this library uses asio, but with that one thread, and I cannot intrusively change the lib to use fibers because I can't impose that on every use-case scenario. I'd rather shield the internal workings from the user.
Second, the coroutine interface seems to work on the basis of special async operations within asio that allow this to work. I don't have those. Consider a mocked-up asio coroutine usage like:

loop {
   asio::async_read( ..params.., yield, ec);
   handle_error(ec);
   asio::async_write( ...params..., yield, ec);
   handle_error(ec);
}

This works because asio offers those async ops that take the coroutine object and allow continuation. My code doesn't have that. At some point I do have to wait on those futures.

loop {
   boost::future<int> result = my_redis.get("value");
   const int value = result.get();
   //..continue
}

And even if it were fiber futures that doesn't change much:

loop {
   boost::fibers::future<int> result = my_redis.get("value");
   const int value = result.get();
   //..continue
}

Asio would still magically have to 'know' that it can switch to another fiber inside the get(). I have seen this page here: https://www.boost.org/doc/libs/1_69_0/libs/fiber/doc/html/fiber/callbacks/then_there_s____boost_asio__.html which I assume talks about this very thing but unfortunately this is way over my head.

Inherently though an Asio io_context running on a single thread *is* a
kind of fiber scheduler for operations posted through that context,
including both actual async_* operations and arbitrary posted and
spawned work.

Yes, in a way I do see that, and I am investigating the use of asio here, if only because this is normally my go-to solution in those cases. It was my original approach before I started looking into fibers, but I got nowhere.
What I would need for this to work is the mock-up code above. I'd have to be able to post a handler into the io_context which can wait on those futures without blocking the io_service. I considered spawning a great many threads on this io_context so I could stomach a number of them blocking without bogging everything down too much, but this seems just wrong.

However, I will continue to explore this option as you are right, I think the solution is just there, I'd only have to see it. 

Cheers,
Stephan



Re: Understanding fibers


Hi,

I'm jumping into the discussion, but I noticed your concern here:

> This works because asio offers those async ops that take the coroutine object and allow continuation. My code doesn't have that. At some point I do have to wait on those futures.

I'm in the process of implementing something similar, also with coroutines. On Windows, my plan for solving this is to create a (native) auto-reset event object and assign it to windows::object_handle. Then I use async_wait on the object. When the event object is signaled, the waiting coroutine is resumed. So, in effect, this implements a non-blocking signal / "future".

On POSIX there are two ways, and both are hack-ish. You could use signal_set to wait for a specific signal (but signals + threads = UGH!, many pitfalls) or create an anonymous pipe; reading a byte from the pipe is equivalent to waiting on an event object, while writing a byte to it is equivalent to signaling it. Such a pipe implements, in effect, an async-awaitable semaphore.

- Stian


Re: Understanding fibers

Hello Stian,

On Mon, Dec 17, 2018 at 9:54 AM Stian Zeljko Vrba <[hidden email]> wrote:

Hi,

 

I’m jumping into the discussion, but I’ve noticed your concern here:

 

I’m in the process of implementing something similar, also with coroutines. On Windows, my plan for solving this is by creating a (native) autoreset event object and assign it to windows::object_handle. Then I use async_wait on the object. When the event object is signaled, the waiting coroutine will be resumed. So, in effect, this implements a non-blocking signal / “future”.


Yes, this seems like a good way of describing it.
I was gonna say something like asio::async_get_future(), which would take a fiber future or a regular one. This would fit perfectly. I could just run an io_service with one thread next to the async server and whenever the async server spits out a new request I could post it right into this io_service. The link Gavin posted, the way I understand it, pretty much describes the other end of this. An async operation that returns a future and I can wait on that on the outside.
Still, this page here: https://www.boost.org/doc/libs/1_69_0/libs/fiber/doc/html/fiber/integration/deeper_dive_into___boost_asio__.html made it clear to me that asio and fibers at this point cannot easily be used together without some real black magick. 

On POSIX there are two ways, and both are hack-ish. You could use signal_set to wait for a specific signal (but signals + threads = UGH!, many pitfalls) or create an anonymous pipe; reading a byte from the pipe is equivalent to waiting on an event object, while writing a byte to it is equivalent to signaling it. Such a pipe implements, in effect, an async-awaitable semaphore.


Well, to be honest, both solutions seem quite hacky and platform-dependent to me. Also, I have nowhere near the skills or the time frame to implement this. Neither would I trust my solution. I'd rather trade in some performance and go for something a lot less perfect.

I'm looking into spawning a thread in which I can spawn a fiber for each of the requests coming in, and then use the fiber futures described earlier. My reasoning is that even though I cannot re-use them, spawning a fiber should still be faster than spawning a thread. A lock-free queue of handlers could be used to post handlers into that thread. I still have to figure out a way to prevent the starvation issue described in the above link: when every fiber waits on a future, nothing wakes them up to poll new items from the hypothetical queue. Apparently, using a fast-paced timer to ping them is the way to go. Quite icky as well. Perhaps something more reasonable can be found, but I'm just rambling on here.

Thanks for your suggestion!

Stephan



Re: Understanding fibers


Hi,

Unfortunately, I can't help you with fibers: I went with coroutines (https://www.boost.org/doc/libs/1_68_0/doc/html/boost_asio/overview/core/spawn.html), not least because it seems (! – I have to test this; I don't fully trust the documentation on this) that exceptions can propagate nicely out of a coroutine handler to the top-level event loop. According to the documentation, this isn't the case for fibers: if an unhandled exception propagates out of the fiber's stack frame, the program is terminated.

Though I have a comment/personal experience on this one:

> Well, to be honest, both solutions seem quite hacky and platform-dependent to me. Also, I have nowhere near the skills or time frame to implement this. Neither would I trust my solution. I'd rather trade in some performance and go for something a lot less perfect.

IME, platform-specific APIs _are_ the fastest way forward; you know what's going on and there are no additional asio abstractions to code against.

// Rant

The project I'm working on started on Linux, where you arguably need asio due to the lack of proper async notifications from the kernel to userspace, so the programming model there is just friendlier with asio (hah!). Now that I've fully migrated the project to Windows, I'm only waiting for the opportunity/time to rip out most of asio and use the native Windows APIs.

.. when the state of C++ networking libraries has reached the point where it's easier to code against the raw Windows API, something has gone wrong in the design of those libraries.

// Rant

- Stian


Re: Understanding fibers


As an aside, integrating kernel-delivered callbacks with Fibers seems to be much more straightforward: https://www.boost.org/doc/libs/1_68_0/libs/fiber/doc/html/fiber/callbacks/overview.html (and its sibling subsections).

Then you could use fibers with futures, etc., and keep your design.

To that end, Linux has the aio_* family of syscalls (e.g., https://linux.die.net/man/3/aio_read). These had a bad reputation in the past; whether the situation has improved today, I do not know. Again, a platform-specific solution might be the best way to go with fibers.

From: Stian Zeljko Vrba
Sent: Monday, December 17, 2018 12:48
To: '[hidden email]' <[hidden email]>
Cc: Stephan Menzel <[hidden email]>
Subject: RE: [Boost-users] Understanding fibers

 

Hi,

 

Unfortunately, I can’t help you with fibers: I’ve went with coroutines https://www.boost.org/doc/libs/1_68_0/doc/html/boost_asio/overview/core/spawn.html not the least because it seems (! – I have to test this, I don’t fully trust the documentation on this) that exceptions can propagate nicely out of a coroutine handler and to the top-level event loop. According to the documentation, this isn’t the case for fibers, if an unhandled exception propagates out of the fiber’s stack frame, the program is terminated.

 

Though I have a comment/personal experience on this one:

 

  • Well, to be honest, both solutions seem quite hacky and platform depend to me. Also, I have nowhere near the skills or time frame to implement this. Neither would I trust my solution. I'd rather trade in some performance and go for something a lot less perfect.

 

IME, platform-specific APIs _are_ the fastest way forward; you know what’s going on and there are no additional asio abstractions to code against.

 

// Rant

 

The project I’m working on started on Linux, and there you arguably need asio due to lack of proper async notifications from the kernel to the userspace, so the programming model there is just friendlier with asio (hah!). Now that I’ve fully migrated the project to Windows, I’m only waiting for the opportunity/time to rip out most of asio and use Windows native APIs.

 

.. when the state of C++ networking libraries has reached the point where it’s easier to code against raw windows API, something has gone wrong in the design of those libraries.

 

// Rant

 

  • Stian

 

From: Boost-users <[hidden email]> On Behalf Of Stephan Menzel via Boost-users
Sent: Monday, December 17, 2018 11:20
To: Boost users list <[hidden email]>
Cc: Stephan Menzel <[hidden email]>
Subject: Re: [Boost-users] Understanding fibers

 

Hello Stian,

 

On Mon, Dec 17, 2018 at 9:54 AM Stian Zeljko Vrba <[hidden email]> wrote:

Hi,

 

I’m jumping into the discussion, but I’ve noticed your concern here:

 

I’m in the process of implementing something similar, also with coroutines. On Windows, my plan for solving this is by creating a (native) autoreset event object and assign it to windows::object_handle. Then I use async_wait on the object. When the event object is signaled, the waiting coroutine will be resumed. So, in effect, this implements a non-blocking signal / “future”.

 

Yes, this seems like a good way of describing it.

I was gonna say something like asio::async_get_future(), which would take a fiber future or a regular one. This would fit perfectly. I could just run an io_service with one thread next to the async server and whenever the async server spits out a new request I could post it right into this io_service. The link Gavin posted, the way I understand it, pretty much describes the other end of this. An async operation that returns a future and I can wait on that on the outside.

Still, this page here: https://www.boost.org/doc/libs/1_69_0/libs/fiber/doc/html/fiber/integration/deeper_dive_into___boost_asio__.html made it clear to me that asio and fibers at this point cannot easily be used together without some real black magick. 

 

On POSIX there are two ways, and both are hack-ish. You could use signal_set to wait for a specific signal (but signals + threads = UGH!, many pitfalls) or create an anonymous pipe; reading a byte from the pipe is equivalent to waiting on a signal object, while writing a byte to it is equivalent to setting a signal. Such a pipe in effect implements an async-awaitable semaphore.

 

Well, to be honest, both solutions seem quite hacky and platform-dependent to me. Also, I have nowhere near the skills or time frame to implement this. Neither would I trust my solution. I'd rather trade in some performance and go for something a lot less perfect.

 

I'm looking into spawning a thread in which I can spawn a fiber for each of the requests coming in and then use the fiber futures described earlier. My reasoning is that even though I cannot re-use them, spawning a fiber should still be faster than spawning a thread. A lock-free queue of handlers could be used to post handlers into that thread. I still have to figure out a way to prevent the starvation issue described in the link above: when every fiber is waiting on a future, nothing wakes them up to poll new items from the hypothetical queue. Apparently, using a fast-paced timer to ping them is the way to go. Quite icky as well. Perhaps something more reasonable can be found, but I'm just rambling on here.

 

Thanks for your suggestion!

 

Stephan

 


_______________________________________________
Boost-users mailing list
[hidden email]
https://lists.boost.org/mailman/listinfo.cgi/boost-users

Re: Understanding fibers

Boost - Users mailing list
In reply to this post by Boost - Users mailing list


Unfortunately, I can’t help you with fibers: I’ve gone with coroutines https://www.boost.org/doc/libs/1_68_0/doc/html/boost_asio/overview/core/spawn.html not least because it seems (! – I have to test this, I don’t fully trust the documentation on this) that exceptions can propagate nicely out of a coroutine handler and to the top-level event loop. According to the documentation, this isn’t the case for fibers: if an unhandled exception propagates out of the fiber’s stack frame, the program is terminated.

boost.coroutine(2) and boost.fiber are based on boost.context - exceptions can be transported between different contexts. boost.fiber is modeled after std::thread -> if you use fiber::future<> you get the exceptions propagated from the fiber.


Re: Understanding fibers

Boost - Users mailing list

Hi, could you please clarify in more detail what you just wrote?

 

Asio uses coroutine v1, whose documentation implies that transportation of exceptions across contexts is automatic, notably the ctor/operator() throwing an exception originating on “the other end” of the asymmetric coroutine pair. This also seems to be implied by the following warning in the documentation: “Code executed by coroutine-function must not prevent the propagation of the detail::forced_unwind exception. Absorbing that exception will cause stack unwinding to fail. Thus, any code that catches all exceptions must re-throw any pending detail::forced_unwind exception.”

 

On the other hand, the documentation for boost context has the following comment: “If the function executed inside an execution_context emits an exception, the application is terminated by calling std::terminate(). std::exception_ptr can be used to transfer exceptions between different execution contexts.” This seems to imply that the execution context is standalone and if an exception propagates, the process is hosed.

 

So, in the context of asio, say we have

 

  1. Coroutine: … something_async(yield[ec]);
  2. Fiber: … something(ec);

followed in both cases by:

      if (ec) throw std::runtime_error("");

 

called from the io service loop. There is no catch in the above piece of code. What happens when:

 

  1. The above is inside a _coroutine_ and invoked as a callback? (my guess: the caller – asio thread – arranges/provides a catch context that the exception propagates into)
  2. The above is inside a _fiber_ and called by yielding to the fiber scheduler? (my guess: the fiber scheduler does not provide an outer catch context, so the program crashes).

 

Is this correct?

 

From: Boost-users <[hidden email]> On Behalf Of Oliver Kowalke via Boost-users
Sent: Monday, December 17, 2018 13:47
To: boost-users <[hidden email]>
Cc: Oliver Kowalke <[hidden email]>
Subject: Re: [Boost-users] Understanding fibers



Re: Understanding fibers

Boost - Users mailing list
On 18/12/2018 02:07, Stian Zeljko Vrba wrote:
>  1. The above is inside a _/coroutine/_ and invoked as a callback? (my
>     guess: the caller – asio thread – arranges/provides a catch context
>     that the exception propagates into)
>  2. The above is inside a _/fiber/_ and called by yielding to the fiber
>     scheduler? (my guess: the fiber scheduler does not provide an outer
>     catch context, so the program crashes).
>
> Is this correct?

Yes, both of those are somewhat correct.  Though there are some missing
links.

With coroutines, you're yielding to a specific other coroutine -- if
that coroutine throws, then the exception is thrown out of the yield
point just as if it were a regular function call.

Asio itself doesn't catch any exceptions -- if a handler throws then it
will be propagated out and into the method that calls io_context.run()
-- if this doesn't have a try-catch block wrapped around it by the user
then the thread and the app will be terminated.

Fibers aren't allowed to propagate exceptions into the fiber scheduler,
since you can't customise the exception handling there -- but if you
don't call your fiber code directly but instead wrap it in a
packaged_task, this gives you a future and also wraps the code in a
try-catch block such that if the code throws an exception it will store
the exception into the future (where it can be rethrown at the consuming
site) instead of terminating.  (Thread-based futures work the same way.)

Re: Understanding fibers

Boost - Users mailing list

Hi,


thanks for the clarifications!


> Asio itself doesn't catch any exceptions -- if a handler throws then it will be propagated out and into the method that calls io_context.run() -- if this doesn't have a try-catch block wrapped around it by the user then the thread and the app will be terminated.

Yes, I was aware of that, I just wrote it imprecisely. What I meant is what you wrote 😊 (i.e., that coroutine machinery arranges for correct exception propagation from the callee to the caller).


Unrelated: do you have advice on error-handling patterns? Say we have two "loops": one that reads data from a socket and another that writes data to a socket. Currently, I handle errors with the following pattern:


error_code ec;
something_async(…, ec);
if (!HandleError(ec, "something_async"))
    return; // shared_from_this() pattern


where HandleError is defined as


bool HandleError(const error_code& ec, const char* what)
{
    if (IsCanceled() || !_socket.is_open() || ec == boost::system::errc::operation_canceled)
        return false;
    if (!ec)
        return true;
    throw Failure(WeakSelf(), ec, what);
}

(IsCanceled() returns the state of an internal Boolean flag.)

because 1) the same io service thread is running multiple workers, and 2) I need to know _which_ worker failed, because the worker is associated with "extra data" in the io_service thread and this data may contain a "restart" policy (e.g. reconnect on a broken connection). Also, the outer loop will cancel the worker in case Failure has been thrown.

But this seems to combine the worst of both worlds, as both exceptions and error codes are used. If I don't pass an error_code to the async calls, asio will throw system_error, but that one is not descriptive enough for my purposes.

Is there a way to automate this, or some other recommended "pattern" to use?

-- Stian



From: Boost-users <[hidden email]> on behalf of Gavin Lambert via Boost-users <[hidden email]>
Sent: Monday, December 17, 2018 10:28 PM
To: [hidden email]
Cc: Gavin Lambert
Subject: Re: [Boost-users] Understanding fibers
 
