Boost.HTTPKit, a new library from the makers of Beast!

Boost.HTTPKit, a new library from the makers of Beast!

Boost - Dev mailing list
During the formal review of Beast, the desire was expressed to have
parsing and serialization available independently of Asio.
Unfortunately, the obstacle to this is the set of concrete functions and
types required for interoperability with the buffer sequence concepts
defined in Asio.

I have drawn up a solution to this problem which involves creating two
brand new libraries.

The first new library is called Boost.Buffers, and I described it
partially in an earlier list posting. This library will contain copies
of just the header files from Asio needed to implement the buffer
sequence concepts. A preprocessor switch will allow users of
Boost.Buffers to decide whether to use the copies of the headers, or
to just include the Asio headers directly:

<https://github.com/vinniefalco/buffers/blob/30ef7031ec0909972a720c0cdd8d6c6e1cc9e37b/include/boost/buffers/asio.hpp#L13>

If an author wants to develop a library which uses just the buffer
sequence concepts from Asio, then instead of writing:

    #include <boost/asio/buffer.hpp>

they will instead write:

    #include <boost/buffers/asio.hpp>

in header files, and in their build scripts set:

    #define BOOST_NET_BUFFER_NO_ASIO 1

This way, their tests and example programs will use the copies of the
Asio headers found in the Boost.Buffers library, and everything will
work; Boost.Asio will not be a dependency.

However, if a consumer of that library wants to also use Boost.Asio,
they can do so by just not setting the macro in their build scripts
(or explicitly setting it to 0).
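
In rough terms, the switch inside <boost/buffers/asio.hpp> could look
like this (a sketch only; the detail header name below is a placeholder,
the real layout is in the file linked above):

    #if defined(BOOST_NET_BUFFER_NO_ASIO) && BOOST_NET_BUFFER_NO_ASIO
    # include <boost/buffers/detail/buffer.hpp>  // standalone copies of the Asio buffer headers
    #else
    # include <boost/asio/buffer.hpp>            // defer to Boost.Asio itself
    #endif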

Now that we have a way to write a library that uses Asio buffer
concepts without depending on all of Asio, we can move the Beast
dynamic buffer implementations and buffer adapters into the
Boost.Buffers library.

Boost.Buffers will contain these public includes:

Boost.Buffers
    <boost/buffers.hpp>
    <boost/buffers/asio.hpp>
        boost::asio::const_buffer
        boost::asio::const_buffers_1
        boost::asio::mutable_buffer
        boost::asio::mutable_buffers_1
        boost::asio::buffer_copy
        boost::asio::buffer_size
        boost::asio::buffer_cast
        boost::asio::is_const_buffer_sequence
        boost::asio::is_mutable_buffer_sequence
    <boost/buffers/buffers_adapter.hpp>
    <boost/buffers/buffers_cat.hpp>
    <boost/buffers/buffers_prefix.hpp>
    <boost/buffers/buffers_suffix.hpp>
    <boost/buffers/buffers_to_string.hpp>
    <boost/buffers/flat_buffer.hpp>
    <boost/buffers/flat_static_buffer.hpp>
    <boost/buffers/multi_buffer.hpp>
    <boost/buffers/ostream.hpp>
    <boost/buffers/read_size.hpp>
    <boost/buffers/static_buffer.hpp>

Boost.Buffers would have dependencies on these Boost libraries:

    Array
    Assert
    Config
    Core
    Exception
    Intrusive
    StaticAssert
    ThrowException
    TypeTraits

Beast will then be modified to take a dependency on Boost.Buffers to
have access to all the buffer adapters and dynamic buffer
implementations.
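
To give a feel for the intended usage, here is a small sketch which
writes and reads bytes through one of the dynamic buffers without
Boost.Asio as a dependency. The boost::buffers::flat_buffer name is an
assumption based on Beast's flat_buffer moving into the new library;
the boost::asio buffer functions come from the copied headers selected
by the macro:

    #define BOOST_NET_BUFFER_NO_ASIO 1  // normally set in the build script
    #include <boost/buffers/asio.hpp>
    #include <boost/buffers/flat_buffer.hpp>
    #include <cstddef>

    std::size_t fill_example()
    {
        boost::buffers::flat_buffer b;              // DynamicBuffer from Beast
        char const text[] = "hello";
        auto mb = b.prepare(5);                     // writable buffer sequence
        boost::asio::buffer_copy(mb, boost::asio::buffer(text, 5));
        b.commit(5);                                // make the bytes readable
        return boost::asio::buffer_size(b.data());  // 5
    }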

Now that we have a new library which offers Asio buffer concepts and
Beast's collection of useful dynamic buffers and buffer adapters, we
can extract the HTTP parsing and serialization algorithms from Beast
to form a new library:

Boost.HTTPKit will contain these public includes:

Boost.HTTPKit
    <boost/http/basic_file_body.hpp>
    <boost/http/basic_dynamic_body.hpp>
    <boost/http/basic_parser.hpp>
    <boost/http/buffer_body.hpp>
    <boost/http/chunk_encode.hpp>
    <boost/http/dynamic_body.hpp>
    <boost/http/empty_body.hpp>
    <boost/http/parse_error.hpp>
    <boost/http/field.hpp>
    <boost/http/fields.hpp>
    <boost/http/message.hpp>
    <boost/http/parser.hpp>
    <boost/http/rfc7230.hpp>
    <boost/http/serializer.hpp>
    <boost/http/span_body.hpp>
    <boost/http/status.hpp>
    <boost/http/string_body.hpp>
    <boost/http/string_param.hpp>
    <boost/http/type_traits.hpp>
    <boost/http/vector_body.hpp>
    <boost/http/verb.hpp>

Boost.HTTPKit would have dependencies on these Boost libraries:

    Array
    Assert
    Config
    Core
    Exception
    Intrusive
    StaticAssert
    ThrowException
    TypeTraits

This new library provides serialization and parsing of HTTP/1 messages
to and from Asio buffer sequences, without requiring the full
Boost.Asio dependency. The library also provides Beast's universal
HTTP message container.
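
As a sketch of how this could look in practice, assuming the existing
Beast parser API carries over into a boost::http namespace (the
request_parser and string_body names, put(), and is_header_done() are
assumptions made on that basis):

    #include <boost/http/parser.hpp>
    #include <boost/http/string_body.hpp>
    #include <boost/buffers/asio.hpp>
    #include <boost/system/error_code.hpp>
    #include <cstring>

    bool parse_request_example()
    {
        char const raw[] =
            "GET / HTTP/1.1\r\n"
            "Host: example.com\r\n"
            "\r\n";

        boost::http::request_parser<boost::http::string_body> p;
        boost::system::error_code ec;
        p.put(boost::asio::buffer(raw, std::strlen(raw)), ec); // feed one buffer
        return ! ec && p.is_header_done();  // header parsed, no Boost.Asio required
    }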

Beast will be modified to take an additional dependency on Boost.HTTPKit.

This solution satisfies long-stated needs of users to have HTTP
parsing and serialization without Boost.Asio.

However, there is one problem, and that is the documentation. All of
Beast's documentation related to buffer-oriented parsing and
serialization, message containers, Body types, Fields concept, buffer
sequences, and dynamic buffers will now be moved into two other
libraries. This leaves the Beast documentation objectively worse off
for users, as it is impossible to cross-link to other library
documentation from inside a Javadoc comment extracted by Doxygen. I
would like to hear ideas for how to smooth this out.

Questions:

How does the community feel about:

* Boost.Buffers as a solution to accessing buffer concepts without Asio?

* Boost.Buffers offering Beast's buffer sequence adapters and dynamic buffers?

* Boost.HTTPKit depending on Boost.Buffers?

* Boost.HTTPKit offering serialization and parsing without Asio?

Any feedback is appreciated

Thanks


Re: Boost.HTTPKit, a new library from the makers of Beast!

Boost - Dev mailing list
2017-10-04 12:22 GMT-03:00 Vinnie Falco via Boost <[hidden email]>:

> This solution satisfies long-stated needs of users to have HTTP
> parsing and serialization without Boost.Asio.
>

Could you compare this to my parser[1]? I've been writing this project
since last year[2].

It could be that I'm slow, but I'm starting to think you like to rush
design decisions.

[1] https://vinipsmaker.github.io/asiohttpserver/
[2] https://gist.github.com/vinipsmaker/4998ccfacb971a0dc1bd

--
Vinícius dos Santos Oliveira
https://vinipsmaker.github.io/


Re: Boost.HTTPKit, a new library from the makers of Beast!

Boost - Dev mailing list
On Wed, Oct 4, 2017 at 9:44 AM, Vinícius dos Santos Oliveira
<[hidden email]> wrote:
> Could you compare this to my parser[1]?

HTTP parsing is a pretty well-defined problem, and if you
want decent performance there aren't a whole lot of
different ways to write it. Looking over your code I don't
see any obvious implementation mistakes, although as far
as the interface goes I would have made different choices,
as you can see by looking at the declaration:

<https://github.com/boostorg/beast/blob/7fe74b1bf544a64ecf8985fde44abf88f9902251/include/boost/beast/http/basic_parser.hpp#L171>

The biggest and most obvious difference is that beast::http::basic_parser
and beast::http::serializer are already in Boost, and have received
extensive testing and vetting from stakeholders. They are also
running on production servers that handle hundreds of
millions of dollars' worth of financial transactions per month.

> It could be that I'm slow

I don't know that I would say you're slow, as I have not
observed your workflow, but a comparison is probably unfair:
I have the opportunity to work on Beast and related projects
such as Boost.Buffers and Boost.HTTPKit full-time.

> but I'm starting to think you like to rush design decisions.

Now that is really unfair, and ignores the enormous number
of hours invested, not just by me but by all of the other people
who participated in the design of Beast (note that I invited
you to be one of those participants).

Nothing about Beast was rushed. In fact, for three months in
2016 I was blocked on the parser design because I could
not figure out an elegant interface that worked with the
stream algorithms and allowed users to supply their
own buffer for the body. I tried several different designs;
if you look in the commit log you can find those alternates.

Here are just a few of the design collaborations that helped
to get Boost.Beast where it is at today:

    Progressive body reading (122 comments!)
    <https://github.com/boostorg/beast/issues/154>

    Split parsing / headers first
    <https://github.com/boostorg/beast/issues/132>

    Expect: 100-continue design
    <https://github.com/boostorg/beast/issues/123>

    Message container constructors
    <https://github.com/boostorg/beast/issues/13>

    Fields concept and allocator support
    <https://github.com/boostorg/beast/issues/124>

    Refactor Body types
    <https://github.com/boostorg/beast/issues/580>

    Asio deallocate-before-invoke contract
    <https://github.com/boostorg/beast/issues/215>

And this does not include over a hundred hours working
with engineers from Ripple (Beast was originally designed
for them) on shared code pads to create the initial container
and stream algorithm ideas.

Quite a lot of time went into the design of Beast; in fact, far
more time was spent designing than actually coding. And
I couldn't have done it without the very helpful interaction with
actual users of the library; Beast was shaped by the needs of
stakeholders.

As a reminder to everyone, the Beast pull request / issues
queue is open to all and I welcome anyone who wants to
participate in the design of the library to jump right in and
comment on any of the issues or pull requests. You can see
that there are a number of open issues that have unresolved
design questions:

<https://github.com/boostorg/beast/issues?q=is%3Aissue+is%3Aopen+label%3ADesign>

Thanks!


Re: Boost.HTTPKit, a new library from the makers of Beast!

Boost - Dev mailing list
In reply to this post by Boost - Dev mailing list
> How does the community feel about:
>
> * Boost.Buffers as a solution to accessing buffer concepts without Asio?
>
> * Boost.Buffers offering Beast's buffer sequence adapters and dynamic buffers?

I reiterate from my Beast review that the best design for the above is
to use:

* span<T>

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2017/p0122r5.pdf

* Ranges TS

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2017/n4685.pdf

Both of these can integrate with the Networking TS, but not the other
way round due to ASIO's legacy design.

You chose, against my advice, to base Beast on the outdated and
hard-coded buffer sequence design in ASIO. It was not a showstopper for Beast
because Beast was so closely tied into ASIO, so I recommended acceptance
for Beast despite this design choice.

But for a standalone library, things are different. I would advise
strongly against accepting a Boost.Buffers library unless it is based on
span<T> and Ranges and is forward-looking toward C++20 design patterns,
not backwards at C++98-era design patterns like ASIO's buffer sequences.

Regarding HTTPKit etc., I haven't looked into it much, but if I remember
correctly I had some issues with your implementation and design there
too, specifically the lack of zero-copy views. Again, as part of a Beast
wholly dependent on ASIO and thus limited by ASIO's limits, that's
acceptable. As a standalone library, where a much wider set of use cases
applies, I think I would need to set a much higher bar in any review. But
I haven't looked into it; you may have completely changed the design and
implementation since I last looked.

Niall

--
ned Productions Limited Consulting
http://www.nedproductions.biz/ http://ie.linkedin.com/in/nialldouglas/



Re: Boost.HTTPKit, a new library from the makers of Beast!

Boost - Dev mailing list
On Wed, Oct 4, 2017 at 4:18 PM, Niall Douglas via Boost
<[hidden email]> wrote:
> I reiterate from my Beast review that the best design for these above is
> to use:
>
> * span<T>
> * Ranges TS

First of all, thanks for investing the time to respond, your feedback
is appreciated. I think this might be one of those rare cases where I
am in partial agreement. A design that uses standard library types is
better than one which requires `std::experimental::net::const_buffer`
or `boost::asio::const_buffer`. However, a couple of points:

* Ranges TS is unnecessary, as ConstBufferSequence and
MutableBufferSequence are already modeled after ForwardRange.

* `span<byte>` and `span<byte const>` are not great choices because
`span` and `byte` are not part of Boost and not part of the standard
library. The closest we can get and still have something compatible
with C++11 is `pair<void const*, size_t>` and `pair<void*, size_t>`.

* Paul's suggestion to change the requirements of
ConvertibleToConstBuffer is even better. Change the definition of
`const_buffer` to include conversion constructors that can accept a
more general concept, such as any class which implements data() and
size().
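
To make that concrete, a relaxed const_buffer might look roughly like
this (a sketch only, not actual Asio or Networking TS wording):

    #include <cstddef>
    #include <type_traits>
    #include <utility>

    class const_buffer
    {
        void const* data_ = nullptr;
        std::size_t size_ = 0;

    public:
        const_buffer() = default;
        const_buffer(void const* p, std::size_t n) : data_(p), size_(n) {}

        // Accept std::string, std::vector<char>, span-like types, etc.
        template<class T, class = typename std::enable_if<
            std::is_convertible<
                decltype(std::declval<T const&>().data()),
                void const*>::value>::type>
        const_buffer(T const& t) : data_(t.data()), size_(t.size()) {}

        void const* data() const { return data_; }
        std::size_t size() const { return size_; }
    };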

If I were to re-implement Beast's buffer adapters, dynamic buffers,
parser, and serializer to use these ideas, then they would no longer be
compatible with Asio. Using them would be more cumbersome and less
composable with other standard components which use Asio buffer
concepts.

Feedback from users is that they overwhelmingly prefer solutions that
work out of the box over adherence to some model of buffer
abstraction. I have no control over what happens in Boost.Asio and I
do not control the evolution of the Networking TS although I have
filed some additional issues against it.

For better or worse, Boost.Asio is the model that we have and what
people are coding against. Therefore, as with Beast my design choice
here is pragmatic - use what exists, and what works. The approach
offered in Boost.Buffers allows libraries to be written which do not
depend on all of Boost.Asio, but yet offer compatibility with Asio's
buffer concepts.

There's a light at the end of the tunnel, though: if you can convince
the Asio author and influence the evolution of the Networking TS to make
changes to the buffer model that eliminate the custom types, then my
proposed Boost.Buffers library could be updated to reflect those
changes. The beneficiaries will at all times be the users: they get
something that works today, and the possibility of something that will
work even better tomorrow.

One thing you might consider is that span<T> does not have the
"buffer debugging" feature found in `boost::asio::const_buffer`:
  <https://github.com/boostorg/asio/blob/b002097359f246b7b1478775251dfb153ab3ff4b/include/boost/asio/buffer.hpp#L109>

I don't see a sane way to add that feature to span<> that doesn't
burden everyone using span, rather than just the people using it
for networking buffers.

The buffer debugging feature is really useful. Asynchronous
programming is already hard enough; every additional advantage
helps.

> You chose, against my advice, to base Beast on the outdated and hard
> coded buffer sequence design in ASIO.

I'll say the same thing I said the last time you brought this up. If
you feel strongly about it, then you need to write a paper. Otherwise,
your advice to me effectively becomes "ignore existing and emerging
standards and invent a new, incompatible concept." That's way too much
risk for me to take on, and I have no evidence that my users want such
a thing. All the feedback I have received thus far cites compatibility
with Asio as a primary motivator for adoption of Beast. Ignoring this
seems...reckless.

> But for a standalone library, things are different. I would advise
> strongly against accepting a Boost.Buffers library unless it is based on
> span<T> and Ranges and is forward looking into C++ 20 design patterns,
> not backwards at C++ 98 era design patterns like ASIO's buffer sequences.

If you feel that span<T>, Ranges, and C++20 design patterns are
important, then you should be enthusiastic about my Boost.Buffers
proposal, because it physically separates the buffer concepts from the
networking algorithms. It effectively "factors out the buffers from
Asio." In other words, it completes the first step of achieving your
goal: it separates the buffers from the rest of Asio so they can be
acted on independently.

Note that I am in favor of Paul's proposal to change the requirements
of ConvertibleToConstBuffer, in order to relax the dependency on a
concrete type. I believe this is in line with your goals as well.

> Regarding HTTPKit etc, I haven't looked into it much, but if I remember
> I had some issues with your implementation and design there too
> specifically the lack of use of zero copy views.

In the version of Beast that was accepted into Boost,
beast::http::basic_parser was already "zero-copy" (when presented with
input as a single contiguous buffer) and beast::http::serializer was
"zero-copy". They still are.

> As a standalone library where a much wider use case applies,
> I think I would need to set a much higher bar in any review.

So, I think my comments earlier apply here as well. The primary
consumers of beast::http::basic_parser and beast::http::serializer are
using Asio buffer concepts (Beast stream algorithms being the best
examples). Factoring out the buffer concepts from Asio into a new
library, Boost.Buffers, and then factoring out the parser and
serializer from Beast into a new library, Boost.HTTPKit, which uses
Boost.Buffers without needing Boost.Asio as a dependency, seems like a
very pragmatic way forward: it gives stakeholders something they've
been asking for, doesn't reinvent established buffer concepts, doesn't
force Boost.Asio as a dependency, and yet remains compatible with
Boost.Asio for the users that want it.

Thanks


Re: Boost.HTTPKit, a new library from the makers of Beast!

Boost - Dev mailing list
In reply to this post by Boost - Dev mailing list
On Sun, Oct 8, 2017 at 5:38 PM, Vinícius dos Santos Oliveira
<[hidden email]> wrote:
> Now, moving on... given that you have __not__ answered how your parser's
> design[1] compares to the parser I've developed

I'll try to provide more clarity. `beast::basic_parser` is designed
with standardization in mind, as I intend to eventually propose Beast
for the standard library. Therefore, I have made the interface as
simple as possible, exposing only the minimum necessary to
achieve the goals that the majority of users want:

* Read the header first, if desired
* Feed Asio style buffer sequences into the parser
* Set independent limits on the number of header and body octets
* Optional fine-grained receipt of chunked body data and metadata

The design of basic_parser (the use of CRTP in particular) is meant to
support the case where a user implements their own Fields container,
or wants a bit more custom handling of the fields (for example,
to avoid storing them).

As with all design choices, tradeoffs are made. The details of parsing
are exposed only to the derived class; complexities are hidden from
the public-facing interface of `basic_parser`. Implementing a stream
algorithm that operates on the parser is a straightforward process:

    template<
        class SyncReadStream, class DynamicBuffer,
        bool isRequest, class Derived>
    std::size_t read(SyncReadStream& stream, DynamicBuffer& buffer,
        basic_parser<isRequest, Derived>& parser, error_code& ec)
    {
        parser.eager(true);
        if(parser.is_done())
        {
            ec.assign(0, ec.category());
            return 0;
        }
        std::size_t bytes_transferred = 0;
        do {
            bytes_transferred += read_some(
                stream, buffer, parser, ec);
            if(ec)
                return bytes_transferred;
        } while(! parser.is_done());
        return bytes_transferred;
    }
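
A caller of that algorithm might look like the following sketch, using
existing Beast types. It assumes an already-connected socket, and that
the read() above rather than beast::http::read is the overload
selected; in real code you would qualify the call to disambiguate:

    #include <boost/asio/ip/tcp.hpp>
    #include <boost/beast/core.hpp>
    #include <boost/beast/http.hpp>

    void read_request(boost::asio::ip::tcp::socket& sock)
    {
        boost::beast::flat_buffer buffer;
        boost::beast::http::request_parser<
            boost::beast::http::string_body> parser;
        boost::beast::error_code ec;
        read(sock, buffer, parser, ec);  // drives the parser until is_done()
    }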

> [1] at most it hides behind a “multiple people/stakeholders agree with me”
> __shield__

There's no hiding going on here. One can only measure the relative
success of designs based on the feedback from users. They opened
issues, and I addressed their use cases, sometimes with a considerable
amount of iteration and back-and-forth, as you have seen in the GitHub
issues quoted in the previous message.

Now, I don't know if the sampling of users that have participated in
Beast's design is representative of the entire C++ community.
However, I do know one thing: if GitHub stars are any measure of the
sample size of the participants, then Beast is off to a good start.
Here's a graph showing the number of stars over time received by
boostorg/beast, tufao, and Boost.Http (the last two being libraries
you authored, I believe):

<http://www.timqian.com/star-history/#boostorg/beast&BoostGSoC14/boost.http&vinipsmaker/tufao>

(Note that the HTTP+WebSocket version of Beast was released in May 2016).

We need to be careful interpreting results like this, of course, so
perhaps we should look at different metrics. Here are links to the
number of closed issues for Beast, tufao, and Boost.Http:

502 Closed issues in Beast:
<https://github.com/boostorg/beast/issues?q=is%3Aissue+is%3Aclosed>

38 Closed issues in tufao
<https://github.com/vinipsmaker/tufao/issues?q=is%3Aissue+is%3Aclosed>

13 Closed issues in Boost.HTTP, from 6 unique users not including the author
<https://github.com/BoostGSoC14/boost.http/issues?q=is%3Aissue+is%3Aclosed>

Again we have to be careful interpreting results like this. But it
sure looks like there is a lot of user participation in Beast. If
approval from a large number of stakeholders is not a compelling
design motivator then what is?

> This tutorial is full of “design implications” blocks where I take the time
> to dissect what it means to go with each choice.

Thus far, no one has asked for more fine-grained access to incoming
HTTP tokens in the manner of `code::method` and
`code::request_target`. If this becomes something that users
consistently ask for, it can be done by changing the requirements on
the derived class of `basic_parser`. This way, details about HTTP
parsing which most people don't care about will not leak into the
beast::http:: namespace. Such a change would not affect existing
stream algorithms on parsers.

Thanks


Re: Boost.HTTPKit, a new library from the makers of Beast!

Boost - Dev mailing list
In reply to this post by Boost - Dev mailing list
2017-10-04 15:20 GMT-03:00 Vinnie Falco <[hidden email]>:

> Now that is really unfair, and ignores the enormous amount
> of hours not just from me but from all of the other people
> who participated in the design of Beast


If anything, it was my response that was rushed. Sorry about that.

Now, moving on... given that you have __not__ answered how your parser's
design[1] compares to the parser I've developed, I wrote an “implementing
Boost.Beast parser interface” section in my documentation:
https://vinipsmaker.github.io/asiohttpserver/#_implementing_boost_beast_parser_interface

This tutorial is full of “design implications” blocks where I take the time
to dissect what it means to go with each choice.

> (note that I invited
> you to be one of those participants).

Yes, just bad timing.

[1] at most it hides behind a “multiple people/stakeholders agree with me”
__shield__


--
Vinícius dos Santos Oliveira
https://vinipsmaker.github.io/


Re: Boost.HTTPKit, a new library from the makers of Beast!

Boost - Dev mailing list
In reply to this post by Boost - Dev mailing list
2017-10-09 22:18 GMT-03:00 Vinnie Falco via Boost <[hidden email]>:

> I'll try to provide more clarity. `beast::basic_parser` is designed
> with standardization in mind, as I intent to eventually propose Beast
> for the standard library.


A standard library is a library where we do not have the luxury of
making mistakes.

If you make a mistake, the API will carry the technical debt forever.

> I have made the interface as [...]
> [...] tradeoffs are made. [...]
> [...] One can only measure the relative
> success of designs based on the feedback from users. They opened
> issues [...]
>

So... it solves the problem for N users... therefore it'll solve the
problem for N + Z users too? Do you see how this looks to me? If I
guess that the problem is the inductive reasoning that I
criticized all along in the tutorial I've linked, would you think
that a wild guess or a reasonable one? Can you understand
why I see the situation this way?

However, I'll pretend that I've chosen the optimistic vision this time.
I'll pretend that you've started to grasp the criticism and you're here
explaining how you've developed the Boost.Beast parser (i.e. the approach
used to tackle development) to justify your design.

For a moment, let's ignore the “justify the design” and focus on the
approach to the development (because the way you presented it is not a
discussion of design). What you're doing is applying a heuristic. I do know
and use this heuristic (directly and indirectly). But there are two points
that I want to add here.

The first point is: do not be a slave to a single heuristic. I also
mentioned one heuristic in the tutorial I linked previously, Occam’s
razor: https://vinipsmaker.github.io/asiohttpserver/#_footnote_8

When you say “we need to be careful interpreting results”... how will
someone who lacks the tools in their cognitive repertoire interpret these
results? Can you describe your other tools to me? I'm not the one who will
praise you for answering quickly. Take your time (people rarely listen to
this advice).

The second point is: here, you need to go *beyond* the heuristics. This is
a point that depends entirely on you. I cannot explain to you how to go
beyond the heuristics. You've just got to do it.

I'll try a new approach here. Given that some of the ideas weren't well
received by you, I took the liberty of converting the Tufão project that you've
mentioned to use the Boost.Beast parser:
https://github.com/vinipsmaker/tufao/commit/56e27d3b77d617ad1b4aea377f993592bc2c0d77

Would you say you like the new result better? Would you say that I've
misused your parser to favour my approach? How would you have done it in
this case? Would you go beyond and accept the idea that the spaghetti
effect is inherent to the callback-based approach of push parsers? Or maybe
would you say that the spaghetti effect is small and acceptable here?

What do you think about the following links?


   - https://github.com/google/pulldown-cmark
   - https://www.ncameron.org/blog/macros-and-syntax-extensions-and-compiler-plugins-where-are-we-at/
   - https://github.com/Marwes/combine


Let's try yet another way to approach the problem (a different perspective
again). We talked about heuristics and I begged you to go beyond. But there
is one vision/perspective of this problem that may help you. Reason +
logic: wouldn't you agree that `basic_parser<T>::eager(false)` is just a
hacky way to implement a pull parser? Why? Remember to pay attention to the
*why*. I'm not interested in the yes/no.

> [...] no one has asked for [...] If this becomes something that users
> consistently ask for, it can be done by [...]


“if you do not give a try to enter in the general problem and insist
on a *myopic vision*”
  — https://vinipsmaker.github.io/asiohttpserver/#_implementing_boost_beast_parser_interface

Why do you only solve the problem that is immediately in front of you?

You're not serious about standardization (consult my comment on technical
debt right at the beginning of this email) if you're unwilling/afraid to
let go of this modus operandi. There is a bright mind hiding behind this
robot. Let us see it.

There is a talk that is pure design, “understanding parser combinators - a
deep dive”[1]. How do you use *any* of your currently presented
lens/perspectives to judge the ideas of this talk? “Look, the idea of
composability here is wrong because he hasn't filled a project with 100
issues on GitHub”. It's just pathetic. I'm not entering into this
leads-to-nowhere line of reasoning, so please just stop.

What would you use as an example of design discussion? The tutorial I've
linked previously[2] or the conversation we're having now?

How wrong was I... I just thought you were ignoring the ideas all along.
How wrong was I... You just don't understand the subject at hand.

“There are these two young fish swimming along, and they happen to meet an
older fish swimming the other way, who nods at them and says, "Morning,
boys, how's the water?" And the two young fish swim on for a bit, and then
eventually one of them looks over at the other and goes, "What the hell is
water?"”

You don't know water, do you? Nor inductive reasoning, nor myopic vision,
nor Occam’s razor and the list goes on...

It's so frustrating... you can't imagine. You just have no idea. Sorry
about the trouble caused so far. I'll meditate on this matter and try to
learn something out of this episode.

“A cat approaches a dog and says “Meow.” The dog looks confused. The cat
repeats, “Meow!” The dog still looks confused. The cat repeats, more
emphatically, “MEEOW!!!” Finally, the dog ventures, “Bow-wow?” The cat
stalks away indignantly, thinking “Dumb dog!””

Thank you for the useful research that you've done. I'll surely use it (and
learn from it).

[1] https://vimeo.com/171704565
[2] a tutorial that I've done in a rush and I'm not very proud of, but it
still touches design


--
Vinícius dos Santos Oliveira
https://vinipsmaker.github.io/


Re: Boost.HTTPKit, a new library from the makers of Beast!

Boost - Dev mailing list
On Thu, Oct 12, 2017 at 6:47 AM, Vinícius dos Santos Oliveira via Boost <
[hidden email]> wrote:

> 2017-10-09 22:18 GMT-03:00 Vinnie Falco via Boost <[hidden email]>:

> [...] spaghetti effect is inherent to the callback-based approach of push
> parsers? Or maybe would you say that the spaghetti effect is small and
> acceptable here?

I've done quite a bit of XML processing in the past, using both PUSH (i.e.
SAX) and PULL parsers, and I also wrote my own JSON PUSH and PULL parsers.
I *much* prefer PULL parsers: they are simpler to use and lead to nicer
client code that's easier to read and follow. Not sure it's relevant to the
discussion here, but just in case I thought I'd share that perspective. --DD


Re: Boost.HTTPKit, a new library from the makers of Beast!

Boost - Dev mailing list
In reply to this post by Boost - Dev mailing list
On 12-10-17 06:47, Vinícius dos Santos Oliveira via Boost wrote:
> What do you think about the following links?

What is the relevance of the links? They're extremely broad and general.
If you are suggesting that the implementation of the parser interface
should use parser combinators/generators, for sure. That is not
necessarily an interface design concern.

What _specific_ interface design concerns do you have in mind when
linking these kind of general purpose libraries/approaches? Are you
proposing a parser combinator library for Boost or the standard? (Would
Spirit X3 fit the bill?).

On 12-10-17 06:47, Vinícius dos Santos Oliveira via Boost wrote:
> I took the liberty to convert the Tufão project that you've
> mentioned to use the Boost.Beast parser:
> https://github.com/vinipsmaker/tufao/commit/56e27d3b77d617ad1b4aea377f993592bc2c0d77

That is nice and tangible. Let's focus on concrete shortcomings,
relevant to the library interface.

Seth



Re: Boost.HTTPKit, a new library from the makers of Beast!

Boost - Dev mailing list
In reply to this post by Boost - Dev mailing list
On Wed, Oct 11, 2017 at 9:47 PM, Vinícius dos Santos Oliveira
<[hidden email]> wrote:
> I took the liberty to convert the Tufão project that you've
> mentioned to use the Boost.Beast parser:
> https://github.com/vinipsmaker/tufao/commit/56e27d3b77d617ad1b4aea377f993592bc2c0d77
>
> Would you say you like the new result better?

It seems pretty reasonable to me.

> Would you say that I've misused your parser to favour my approach?
> How would you have done it in this case?

Misused? I don't think so. The only meaningful change I would make is
that I would have simply called basic_parser::is_keep_alive() instead
of re-implementing the logic for interpreting the Connection header.

> Would you go beyond and accept the idea that the spaghetti effect
> is inherent to the callback-based approach of push parsers?

This is where we are venturing into the world of opinion. It seems
like you have a general aversion to callbacks. But there is a reason
Beast's parser is written this way. Recognize that there are two
primary consumers of the parser:

1. Stream algorithms such as beast::http::read_some
2. Consumers of structured HTTP elements (e.g. fields)

The Beast design separates these concerns. Public member functions of
`basic_parser` provide the interface needed for stream algorithms,
while calls to the derived class provide the structured HTTP elements.
I don't think it is a good idea to combine these into one interface,
which you have done in your parser. The reason is that this
unnecessary coupling pointlessly complicates the writing of the stream
algorithm. Anyone who wants to write an algorithm to feed the parser
from some source of incoming bytes now has to care about tokens. This
is evident from your documentation:

<http://boostgsoc14.github.io/boost.http/#parsing_tutorial1>

In your example you declare a class `my_socket_consumer`. It has a
single function `on_socket_callback` which is called repeatedly with
incoming data. Not shown in your example is the stream algorithm (the
function which interacts with the actual socket to retrieve the data).
However, we know that this stream algorithm must be aware of the
concrete type `my_socket_consumer` and that it needs to call
`on_socket_callback` with an `asio::buffer`. A signature for this
stream algorithm might look like this:

    template<class SyncReadStream>
    void read(SyncReadStream& stream, my_socket_consumer& consumer);

Observe that this stream algorithm can only ever work with that
specific consumer type. In your example, `my_socket_consumer` handles
HTTP requests. Therefore, this stream algorithm can now only handle
HTTP requests. In order to receive a response, a new stream algorithm
must be written. Compare this to the equivalent signature of a Beast
styled stream algorithm:

    template<class SyncReadStream, bool isRequest, class Derived>
    void read(SyncReadStream& stream, basic_parser<isRequest, Derived>& parser);

This allows an author to create a stream algorithm which works not
just for requests which store their data as data members in a class
(`my_socket_consumer`) but for any parser, thanks to the CRTP design.
For example, if I create a parser by subclassing
`beast::http::basic_parser` with an implementation that discards
headers I don't care about, then it will work with the stream
algorithm described above without requiring modification to that
algorithm. It is interesting to note that your `my_socket_consumer` is
roughly equivalent to the beast::http::parser class (which is derived
from beast::http::basic_parser):

<https://github.com/boostorg/beast/blob/f09b2d3e1c9d383e5d0f57b1bf889568cf27c39f/include/boost/beast/http/parser.hpp#L45>

Both of these classes store incoming structured HTTP elements in a
container for holding HTTP message data. However note that unlike
`beast::http::parser`, `my_socket_consumer` also has to know about
buffers:

    void on_socket_callback(asio::buffer data)
    {
        ....
        buffer.push_back(data);
        request_reader.set_buffer(buffer);

It might not be evident to casual readers but the implementation of
`my_socket_consumer` has to know that the parser needs the serialized
version of the message to be entirely contained in a single contiguous
buffer. In my opinion this is a design flaw because it does not
enforce a separation of concerns. The handling of structured HTTP
elements should not concern itself with the need to assemble the
incoming message into a single contiguous buffer; that responsibility
lies with the stream algorithm.

The design decision in Beast is to keep the interfaces used by stream
algorithms separate from the interface used by consumers of HTTP
tokens. Furthermore the design creates a standard interface so that
stream algorithms can work with any instance of `basic_parser`,
including both requests and responses, and for any user-defined
derived class.
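
For concreteness, here is roughly what such a user-defined derived
class can look like: a request parser which keeps the target and body
but simply discards every header field. This is a sketch written
against the current derived-class callback requirements documented for
basic_parser; treat the exact signatures as illustrative, they may
drift:

    #include <boost/beast/core.hpp>
    #include <boost/beast/http/basic_parser.hpp>
    #include <boost/optional.hpp>
    #include <cstddef>
    #include <cstdint>
    #include <string>

    namespace http = boost::beast::http;
    using boost::beast::error_code;
    using boost::beast::string_view;

    class discarding_parser
        : public http::basic_parser<true, discarding_parser>
    {
        friend class http::basic_parser<true, discarding_parser>;

        std::string target_;
        std::string body_;

        void on_request_impl(http::verb, string_view,
            string_view target, int, error_code&)
        {
            target_.assign(target.data(), target.size());
        }

        void on_response_impl(int, string_view, int, error_code&)
        {
            // never called: this is a request parser
        }

        void on_field_impl(http::field, string_view, string_view, error_code&)
        {
            // deliberately discard every header field
        }

        void on_header_impl(error_code&)
        {
        }

        void on_body_init_impl(
            boost::optional<std::uint64_t> const&, error_code&)
        {
        }

        std::size_t on_body_impl(string_view body, error_code&)
        {
            body_.append(body.data(), body.size());
            return body.size();  // bytes consumed
        }

        void on_chunk_header_impl(std::uint64_t, string_view, error_code&)
        {
        }

        std::size_t on_chunk_body_impl(
            std::uint64_t, string_view body, error_code&)
        {
            body_.append(body.data(), body.size());
            return body.size();
        }

        void on_finish_impl(error_code&)
        {
        }

    public:
        discarding_parser() = default;

        std::string const& target() const { return target_; }
        std::string const& body() const { return body_; }
    };

It plugs into the stream algorithm above without that algorithm
changing at all, which is the point of the CRTP split.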

Thanks


Push/pull parsers & coroutines (Was: Boost.HTTPKit, a new library from the makers of Beast!)

Boost - Dev mailing list
Dear All,

This is related to the ongoing discussion of the Beast HTTP parser.  
I have been thinking in general about how best to implement parser
APIs in modern and future C++.  Specifically, I've been wondering
whether the imminent arrival of low-overhead coroutines ought to
change best practice for this sort of interface.

In the past, I have found that there is a trade-off between parser
implementation complexity and client code complexity.  A "push" parser,
which invokes client callbacks as tokens are processed, is easier to
implement but harder to use as the client has to track its state
between callbacks with e.g. an explicit FSM.  On the other hand, a
"pull parser" (possibly using an iterator interface) is easier for
the client but instead now the parser may need the explicit state
tracking.
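
To illustrate the client-side burden, the consumer of a push parser
ends up writing something like the following explicit state machine
just to accumulate header fields (the parser and callback names here
are entirely hypothetical):

#include <map>
#include <string>

struct header_client
{
  enum class state { in_name, in_value };
  state st = state::in_name;
  std::string name, value;
  std::map<std::string, std::string> headers;

  // invoked repeatedly by a hypothetical push parser
  void on_char(char c)
  {
    switch (st) {
      case state::in_name:
        if (c == ':') st = state::in_value;
        else name += c;
        break;
      case state::in_value:
        if (c == '\n') {
          headers.emplace(name, value);
          name.clear(); value.clear();
          st = state::in_name;
        } else {
          value += c;
        }
        break;
    }
  }
};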

Now, with stackless coroutines due "real soon now", we can avoid
needing explicit state on either side.  In the parser we can
co_yield tokens as they are processed and in the client we can
consume them using input iterators.  The use of co-routines doesn't
need to be explicit in the API; the parser can be said to return a
range<T>, and then return a generator<T>.

Here's a very very rough sketch of what I have in mind, for the case
of HTTP header parsing; note that I don't even have a compiler that
supports coroutines yet so this is far from real code:

generator<char> read_input(int fd)
{
  char buf[4096];
  while (1) {
    int r = ::read(fd,buf,4096);
    if (r <= 0) co_return;   // plain `return` is not allowed in a coroutine
    for (int i = 0; i < r; ++i) {
      co_yield buf[i];
    }
  }
}

template <typename INPUT_RANGE>
generator< pair<string,string> > parse_header_lines(INPUT_RANGE input)
{
  typedef typename INPUT_RANGE::const_iterator iter_t;
  iter_t i = input.begin(), e = input.end();
  while (i != e) {
    iter_t j = std::find(i,e,':');
    string key(i,j);
    // (That's broken, as iter_t is a single-pass input iterator. We
    // need to copy to the string and check for ':' at the same time.
    // It's trivial with a loop.)
    ++j;
    iter_t nl = std::find(j,e,'\n');
    string value(j,nl);
    ++nl;
    i = nl;
    co_yield pair(key,value);
  }
}

void parse_http_headers(int fd)
{
  map<string,string> headers;
  auto g = parse_header_lines( read_input(fd) );
  for (auto h: g) {
    headers.insert(h);
  }
}

An "exercise for the reader" is to extend that to something that will
parse headers followed by a body.

Questions: how efficient is this in practice?  Is this really simpler to
write than a non-coroutine version?  Will all of our code use this style
in the (near?) future?  How should we be writing code now so that it is
compatible with this style in the future?

Thanks for reading,


Phil.




Re: Push/pull parsers & coroutines (Was: Boost.HTTPKit, a new library from the makers of Beast!)

Boost - Dev mailing list
On Fri, Oct 13, 2017 at 11:59 AM, Phil Endecott via Boost
<[hidden email]> wrote:
> Dear All,
> A "push" parser,
> which invokes client callbacks as tokens are processed, is easier to
> implement but harder to use as the client has to track its state
> between callbacks with e.g. an explicit FSM.  On the other hand, a
> "pull parser" (possibly using an iterator interface) is easier for
> the client but instead now the parser may need the explicit state
> tracking.

That is generally true, and especially true for XML and other
languages that have a similar structure. Specifically, there are
opening and closing tags which determine the validity of subsequent
grammar and create a recursive structure (as in HTML).

But this is not the case for HTTP. There are no opening and closing
tags. There is no need to keep a "stack" of "open tags". It is quite
straightforward. Therefore, when designing an HTTP parser we can place
less emphasis on the style of parser and instead focus those energies
to other considerations (as I described in my previous post, regarding
the separation of concerns for stream algorithms and parser
consumers).

If you look at the Beast parser derived class, you can see that the
state is quite minimal:

    template<bool isRequest, class Body, class Allocator>
    class parser
        : public basic_parser<isRequest, parser<isRequest, Body, Allocator>>
    {
        message<isRequest, Body, basic_fields<Allocator>> m_;
        typename Body::writer wr_;
        bool wr_inited_ = false;
        std::function<...> cb_h_; // for manual chunking
        std::function<...> cb_b_; // for manual chunking
        ...

<https://github.com/boostorg/beast/blob/f09b2d3e1c9d383e5d0f57b1bf889568cf27c39f/include/boost/beast/http/parser.hpp#L45>

Callbacks don't need to store state used by subsequent callbacks to
interpret the incoming structured HTTP data, because HTTP is simple
compared to XML or HTML.

> Here's a very very rough sketch of what I have in mind, for the case
> of HTTP header parsing; note that I don't even have a compiler that
> supports coroutines yet so this is far from real code:

I think it is great that you're providing an example, but you have
chosen the simplest, most regular part of HTTP, which is the headers. I
suspect that if you try to use the iterator model for the start-line
(which is different for requests and responses) and then try to
express the message body using iterators you will run into
considerable difficulty coming up with a design that is elegant and
feature-rich. Especially when you consider the need to transform the
chunk-encoding while providing the metadata to the caller. I know this
because I went through many iterations before settling on what is in
Beast currently.

Thanks


Re: Push/pull parsers & coroutines (Was: Boost.HTTPKit, a new library from the makers of Beast!)

Boost - Dev mailing list
In reply to this post by Boost - Dev mailing list
On 13-10-17 20:59, Phil Endecott via Boost wrote:
> Specifically, I've been wondering
> whether the imminent arrival of low-overhead coroutines ought to
> change best practice for this sort of interface.

That's nice, but it can't inform the design of a library that exists
now. Of course, the interface would be best served if it didn't exclude
better¹ options in the future.

¹ coroutines are not zero cost

On 13-10-17 20:59, Phil Endecott via Boost wrote:
> Now, with stackless coroutines due "real soon now", we can avoid
> needing explicit state on either side.

Coros have explicit state but with syntactic sugar. The syntactic sugar
in this case has runtime overhead.

> Questions: how efficient is this in practice?  

In practice it should be profiled, but it _will_ have overhead.

> Is this really simpler to write than a non-coroutine version?
>
In all but the most trivial cases I think it's simpler. To write.

> Will all of our code use this style in the (near?) future?
>
Will all of our code use this style: Most definitely not (because then
we'd not be using C++, the language that exists to eliminate overhead)

> How should we be writing code now so that it is
> compatible with this style in the future?
This is the most relevant question. I applaud it being asked. I don't
have the answer yet.
Slightly related, in my book, may be the way in which Boost Asio caters
for different async patterns (yield_context, use_future or direct
handlers). Asio coded the logic into the async_result customization
point.
(http://www.boost.org/doc/libs/1_65_1/doc/html/boost_asio/reference/async_result.html)

I suppose we could learn by assimilating a device like that.

Seth



Re: Push/pull parsers & coroutines (Was: Boost.HTTPKit, a new library from the makers of Beast!)

Boost - Dev mailing list
On Sat, Oct 14, 2017 at 8:03 AM, Seth via Boost <[hidden email]> wrote:
> ¹ coroutines are not zero cost

That depends. I've done some investigation into the Coroutines TS
described in n4134. For coroutines whose scope is strictly limited to
the calling function, they can be implemented with zero cost (no
dynamic allocation and comparable assembly output). The expository
code that Phil provided certainly falls into that category.

Thanks


Re: Push/pull parsers & coroutines

Boost - Dev mailing list
In reply to this post by Boost - Dev mailing list
Seth wrote:
> coroutines are not zero cost

In some cases they can have negative cost.  See Gor
Nishanov's CppCon 2015 presentation, "C++ Coroutines - a
negative overhead abstraction".

With coroutines, the state is essentially a program counter
value which can be saved and restored with similar cost to a
function call or return.  When the alternative is something
like a state enum and a switch statement, the coroutine is
going to win.


Regards, Phil.




Re: Push/pull parsers & coroutines

Boost - Dev mailing list
In reply to this post by Boost - Dev mailing list
Vinnie Falco wrote:

> On Fri, Oct 13, 2017 at 11:59 AM, Phil Endecott via Boost
> <[hidden email]> wrote:
>> A "push" parser,
>> which invokes client callbacks as tokens are processed, is easier to
>> implement but harder to use as the client has to track its state
>> between callbacks with e.g. an explicit FSM.  On the other hand, a
>> "pull parser" (possibly using an iterator interface) is easier for
>> the client but instead now the parser may need the explicit state
>> tracking.
>
> That is generally true, and especially true for XML and other
> languages that have a similar structure. Specifically, that there are
> opening and closing tags which determine the validity of subsequent
> grammar, and have a recursive structure (like HTML).
>
> But this is not the case for HTTP. There are no opening and closing
> tags. There is no need to keep a "stack" of "open tags". It is quite
> straightforward. Therefore, when designing an HTTP parser we can place
> less emphasis on the style of parser and instead focus those energies
> to other considerations (as I described in my previous post, regarding
> the separation of concerns for stream algorithms and parser
> consumers).
>
> If you look at the Beast parser derived class, you can see that the
> state is quite minimal:
>
>     template<bool isRequest, class Body, class Allocator>
>     class parser
>         : public basic_parser<isRequest, parser<isRequest, Body, Allocator>>
>     {
>         message<isRequest, Body, basic_fields<Allocator>> m_;
>         typename Body::writer wr_;
>         bool wr_inited_ = false;
>         std::function<...> cb_h_; // for manual chunking
>         std::function<...> cb_b_; // for manual chunking
>         ...

You still have an explicit state machine, i.e. a state enum and a
switch statement in a loop; I'm looking at impl/basic_parser.ipp for
example.

But I don't want to dwell on this particular code.  I'm just considering,
generally, whether this style of code is soon going to look "antique" -
in the way that 15-year-old code full of explicit new and delete looks
antediluvian now that we're all using smart pointers.

I think it's clear that often coroutines can make the code simpler to
write and/or easier to use.  The question is what do we lose.  The
issue of generator<T> providing only input iterators is the most
significant issue I've spotted so far.  This is in some way related
to the whole ASIO "buffer sequence" thing; the code I posted before
read into contiguous buffers, but that was lost before the downstream
code saw it, so it couldn't hope to optimise with e.g. word-sized
copies or compares.  Maybe this could be fixed with some sort of segmented
iterator, or something other than generator<T> as the coroutine type,
or something.  Or maybe it's unfixable.

Do other languages have anything to teach us about this?  What do
users of Boost.Coroutine think?


Regards, Phil.






Re: Push/pull parsers & coroutines

Boost - Dev mailing list
On Sat, Oct 14, 2017 at 12:03 PM, Phil Endecott via Boost
<[hidden email]> wrote:
> The
> issue of generator<T> providing only input iterators is the most
> significant issue I've spotted so far.  This is in some way related
> to the whole ASIO "buffer sequence" thing; the code I posted before
> read into contiguous buffers, but that was lost before the downstream
> code saw it, so it couldn't hope to optimise with e.g. word-sized
> copies or compares.

Buffer sequences are not the problem; it is that the parsed HTTP data
types are heterogeneous. For example, the series of types generated
when parsing a request looks like this:

1. std::pair<verb, string>: verb enum (if known) and method string
2. string: request-target string
3. integer (HTTP-version)
4. vector<tuple<field, string, string>>: field name enum (if known), name, value
5. vector<string>: body data
OR
5. vector<string, string>: body data plus chunk-extension

An interface which presents parsed data through a function return
value (for example, an iterator's operator*) is only capable of
yielding one type. The only way to use the same control flow and
produce different types is to do two things: inform the caller of the
type of the next incoming object, and then provide a set of functions
from which the caller chooses the correct one with the proper matching
return type for receiving the next value.

You can see this in the Boost.Http parser calling code:

        do {
            request_reader.next();
            switch (request_reader.code()) {
            case code::skip:
                // do nothing
                break;
            case code::method:
                method = request_reader.value<token::method>();
                break;
            case code::request_target:
                request_target = request_reader.value<token::request_target>();
                break;
            case code::version:
                version = request_reader.value<token::version>();
                break;
            case code::field_name:
                last_header = request_reader.value<token::field_name>();
            }
        } while(request_reader.code() != code::end_of_message);

A viable alternative, which does not preserve the same structure of
calling code, is to use a type of "visitor". The parser calls a user
defined function specific to the next anticipated token, whose
argument list has the correct types. This is the approach used in
Beast. The parser calls a particular member function of the derived
class depending on what structured element was parsed. The arguments
to the member function have the correct high level types.

For example, when Beast parses the request-line it invokes a member
function with this signature in the derived class:

    /// Called after receiving the request-line (isRequest == true).
    void
    on_request_impl(
        verb method,                // The method verb; verb::unknown if no match
        string_view method_str,     // The method as a string
        string_view target,         // The request-target
        int version,                // The HTTP-version
        error_code& ec);            // The error returned to the caller, if any

Note the rich variety of types: `verb` is an enumeration of known HTTP methods:

<http://www.boost.org/doc/libs/master/libs/beast/doc/html/beast/ref/boost__beast__http__verb.html>

`method_str` is the exact method string extracted by the parser. This
is needed when the method does not match one of the method strings
known to the library, indicated by the enumeration value
`verb::unknown`.

`target` is a straightforward string, while `version` is conveyed as an integer.

Since the parser owns the control flow at the time the member function
is called, the `ec` output parameter allows the callee to indicate
that it wishes to break out of the parser's loop and return control to
the calling function.
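
For example, a derived class could reject anything other than HTTP/1.1
from inside this callback simply by setting ec (a sketch; the
particular error enumerator used here is only for illustration):

    void
    on_request_impl(
        verb method, string_view method_str,
        string_view target, int version,
        error_code& ec)
    {
        if(version != 11)
        {
            ec = error::bad_version;  // any error_code would do
            return;                   // the parser stops and returns ec to its caller
        }
        // otherwise record method/target as needed
    }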

After the request-line comes zero or more calls to a member function
with field name/value pairs. That member function signature looks like
this:

    /// Called after receiving a header field.
    void
    on_field_impl(
        field f,                    // The known-field enumeration constant
        string_view name,           // The field name string.
        string_view value,          // The field value
        error_code& ec);            // The error returned to the caller, if any

Note how the collection of types presented for a header field is
different from the request-line. Expressing this irregular stream of
different types through an iterator interface is going to be very
clumsy. Furthermore, there is metadata generated during the parse
which is not easily reflected in an iterator interface.

For example, after the HTTP headers have been parsed, Beast calculates
the "keep-alive" semantic as well as the disposition of the
Content-Length, which may be in three states: body-to-eof, chunked, or
known. The keep-alive semantics are communicated to the caller of the
parser through a member function `basic_parser::is_keep_alive`:

<http://www.boost.org/doc/libs/master/libs/beast/doc/html/beast/ref/boost__beast__http__basic_parser/is_keep_alive.html>

I described in a previous post how Beast's parser exposes two
interfaces. The public interface is consumed by stream algorithms
(e.g. read_some, async_read_some) while the derived class interface is
used to store structured HTTP elements. The function `is_keep_alive` is
exposed through the public interface of the parser because it is
primarily of interest to the stream algorithm, since the stream
algorithm concerns itself with the connection and whether or not it
should be closed afterwards.

Meanwhile, the Content-Length disposition is exposed to the derived
class since it is a piece of metadata of interest to the algorithm
which stores the body in the message container. It is communicated by
the parser through a call to this derived class member:

    /// Called just before processing the body, if a body exists.
    void
    on_body_init_impl(
        boost::optional<std::uint64_t> const&
            content_length,         // Content length if known, else `boost::none`
        error_code& ec);            // The error returned to the caller, if any

There is so much type irregularity in the information presented during
the parse that I feel an iterator based approach would be, to use
informal terms, "quite ugly."

Thanks


Re: Push/pull parsers & coroutines

Boost - Dev mailing list
In reply to this post by Boost - Dev mailing list
On 14-10-17 20:04, Phil Endecott via Boost wrote:
> In some cases they can have negative cost.  See Gor
> Nishanov's CppCon 2015 presentation, "C++ Coroutines - a
> negative overhead abstraction".

I'm sorry, I was assuming from experience with Boost.Coroutine only. This
is indeed fantastic stuff, and I had seen that particular video. Thanks for
correcting my memory.


Seth

