Re: [Boost-users] Interest in a Unicode library for Boost?
On Sat, 26 Oct 2019 at 03:11, Zach Laine via Boost-users <
[hidden email]> wrote:
> About 14 months ago I posted the same thing. There was significant work
> that needed to be done to Boost.Text (the proposed library), and I was a
> bit burned out.
> Now I've managed to make the necessary changes, and I feel the library is
> ready for review, if there is interest.
> This library, in part, is something I want to standardize.
> It started as a better string library for namespace "std2", with minimal
> Unicode support. Though "std2" will almost certainly never happen now,
> those string types are still in there, and the library has grown to also
> include all the Unicode features most users will ever need.
> Github: https://github.com/tzlaine/text
> Online docs: https://tzlaine.github.io/text
> If you care about portable Unicode support, or even addressing the
> embarrassment of being the only major production language with next to no
> Unicode support, please have a look and provide feedback.
> I gave a talk about this at C++Now in May 2018, and now it's a bit out of
> date, as the library was not then finished. It's three hours, so, y'know,
> maybe skip it. For completeness' sake:
> https://www.youtube.com/watch?v=944GjKxwMBo&index=7&list=PL_AKIMJc4roVSbTTfHReQTl1dc9ms0lWH
> https://www.youtube.com/watch?v=GJ2xMAqCZL8&list=PL_AKIMJc4roVSbTTfHReQTl1dc9ms0lWH&index=8
(as a power user)
I would be interested to have such a library in Boost, and I already planned
to try Boost.Text in my next C++ project involving text.
I am following the discussions happening in SG16 and understand that there
are some differences with the parts that will be proposed for
standardisation (as ThePHD explains in his talk).
Though honestly both approaches seem to solve my problems, so I'm open to
trying both. If Boost.Text is stable today, I'm happy to use it (at least
to replace ICU and have a proper Unicode text type).
> On 29.10.2019 17:11, Zach Laine wrote:
>> - for the sake of completeness the normalization type used at the text
>> level ought to be a policy parameter; although I do understand your
>> arguments against it I think it should be there even at the cost of
>> different text types being inoperable without conversions
> I disagree. Policy parameters are bad for reasoning. If I see a
> text::text, as things currently stand, I know that it is stored as a
> contiguous array of UTF-8, and that it is normalized FCC. If I add a
> template parameter to control the normalization, I change the invariants of
> the type. Types with different invariants should have different names. To
> do otherwise is a violation of the single responsibility principle.
> Okay, policy or no policy was not my point ... it was to allow
> for different underlying normalizations. Granted, it may only be important
> to (a few) corner cases where input and/or output normalizations are given,
> and your assessment that it may not be worth the effort is reasonable ...
> unless you are aiming towards adding to the standard. Then the completeness
> imho becomes more important.
> Frankly, I'm not proficient enough in meta-programming to make a
> strong case either for a policy parameter or for explicit types/templates. I
> just happen to prefer the policy-based approach.
Understood. FWIW, the algorithms provided by Boost.Text make it possible
to use any normalization representation, though at times conversions may be
necessary. Some of those conversions are mandated by the Unicode standard
itself -- you cannot feed NFC, NFKC, or NFKD to the collation algorithm,
for instance (though implementations are possible for NFD and FCC).
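To make that conversion requirement concrete, here is a toy sketch of canonical decomposition (precomposed NFC-style data turned into decomposed NFD-style data) for a single character. This is illustration only, not Boost.Text's API; real normalization consults the full Unicode Character Database.

```cpp
#include <cassert>
#include <map>
#include <vector>

// Toy decomposition table covering one character; a real implementation
// consults the Unicode Character Database, as Boost.Text does.
std::vector<char32_t> to_nfd(char32_t cp) {
    static const std::map<char32_t, std::vector<char32_t>> decomp = {
        {U'\u00E9', {U'e', U'\u0301'}},  // e-acute -> e + combining acute
    };
    auto it = decomp.find(cp);
    return it != decomp.end() ? it->second : std::vector<char32_t>{cp};
}
```

Feeding NFC data to the collation algorithm means performing exactly this kind of decomposition first, which is the conversion cost being discussed.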
>> - at the text level I'm not sure I'm willing to cope with different
>> fundamental text types; I just want to use boost::text::text, pretty
>> much the same as I use std::string as an alias to much more complex
>> class template; heck, even at the string layer I'd probably prefer
>> rope/contiguous concept to be a policy parameter to the same type
> That would be like adding a template parameter to std::vector that makes
> it act like a std::deque for certain values of that parameter. Changing
> the space and time complexity of a type by changing a template parameter is
> the wrong answer.
> No, that is not making std::vector act as std::deque - the text
> would still remain text and act as text, with the same interface.
> It's more like a FIFO implementation using either std::vector or std::deque
> for its store - since in both cases the FIFO has the same interface and
> functionally behaves the same, I really don't want two distinct types. The
> type template with the parameter that makes the choice between the
> underlying storage seems much more natural to me.
Your example highlights my point. For N inputs to your FIFO queue, a
deque-backed implementation is worst-case O(N). A vector-backed implementation
is worst-case O(N*N). The invariants of the type matter, and they matter a
lot. Saying a foo may be like a bar or like a baz only works when bar and
baz are so similar that you cannot observe a difference in their behavior.
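The FIFO being debated can be sketched as one template over two stores; this is a minimal illustration of the complexity argument, not code from the library. The interface is identical either way, but draining N elements is O(N) total with std::deque (popping the front is O(1)) and O(N*N) with std::vector (erasing the front shifts every remaining element).

```cpp
#include <cassert>
#include <deque>
#include <vector>

// Same FIFO interface over either store; only the complexity differs.
template <typename Store>
class fifo {
    Store store_;
public:
    void push(typename Store::value_type v) { store_.push_back(v); }
    typename Store::value_type pop() {
        auto v = store_.front();
        store_.erase(store_.begin());  // O(1) for deque, O(size) for vector
        return v;
    }
    bool empty() const { return store_.empty(); }
};
```

One side of the argument: because the complexity difference is observable, the two instantiations have different invariants and deserve different names. The other: since the interface and functional behavior are the same, a policy parameter is the natural spelling.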
On Wed, Oct 30, 2019 at 7:59 AM Klaim - Joël Lamotte <[hidden email]> wrote:
> On Sat, 26 Oct 2019 at 03:11, Zach Laine via Boost-users <
> [hidden email]> wrote:
>> About 14 months ago I posted the same thing. There was significant work
>> that needed to be done to Boost.Text (the proposed library), and I was a
>> bit burned out.
>> Now I've managed to make the necessary changes, and I feel the library is
>> ready for review, if there is interest.
>> This library, in part, is something I want to standardize.
> (as a power user)
> I would be interested to have such a library in Boost, and I already planned
> to try Boost.Text in my next C++ project involving text.
> I am following the discussions happening in SG16 and understand that there
> are some differences with the parts that will be proposed for
> standardisation (as ThePHD explains in his talk).
> Though honestly both approaches seem to solve my problems, so I'm open to
> trying both. If Boost.Text is stable today, I'm happy to use it (at least
> to replace ICU and have a proper Unicode text type).
Yes, JeanHeyd and I started with very different approaches, but we're
> On Fri, Nov 1, 2019 at 6:35 AM Mathias Gaunard <
> [hidden email]> wrote:
>> On Sat, 26 Oct 2019 at 02:11, Zach Laine via Boost-users
>> <[hidden email]> wrote:
>> > About 14 months ago I posted the same thing. There was significant
>> work that needed to be done to Boost.Text (the proposed library), and I was
>> a bit burned out.
>> > Now I've managed to make the necessary changes, and I feel the library
>> is ready for review, if there is interest.
>> > This library, in part, is something I want to standardize.
>> > It started as a better string library for namespace "std2", with
>> minimal Unicode support. Though "std2" will almost certainly never happen
>> now, those string types are still in there, and the library has grown to
>> also include all the Unicode features most users will ever need.
>> > Github: https://github.com/tzlaine/text
>> > Online docs: https://tzlaine.github.io/text
>> I would start by removing the superlative statements about Unicode
>> being "hard" or "crazy".
>> It's not that complicated compared to the actual hard problems that
>> software engineers solve every day. The only thing is that people
>> misunderstand what the scope of Unicode is: it's not just an encoding,
>> it's a database and a set of algorithms (relying on said database)
>> to facilitate natural text processing of arbitrary scripts, and it makes
>> compromises to integrate with existing industry practices that predate
>> bringing all those scripts together under the same umbrella.
> Right. Unicode encodes all natural languages that anyone has taken the
> time to put into Unicode. I stand by the implication that natural
> languages are crazy.
>> As for the string/container/memory management, that is quite irrelevant.
>> That sort of stuff has nothing to do with Unicode and I certainly do
>> not want some Unicode library to mess with the way I am organizing how
>> my data is stored in memory.
>> Your rope etc. containers belong in a completely independent library.
> So then maybe don't use those parts? They're independent; you don't have
> to use them to use the Unicode algorithms.
>> What's important is providing an efficient Unicode character database,
>> and implementing the algorithms in a way that is generic, working for
>> arbitrary ranges and being able to be lazily evaluated (i.e. range adaptors).
>> I already did all that work more than 10 years ago as a two-month GSoC
>> project, though there are some limitations since at that time ranges
>> and ranges adaptors were still fairly new ideas for C++. It does
>> however provide a generic framework to define arbitrary algorithms
>> that can be evaluated either lazily or eagerly.
> Clearly you are more capable than I am. It took me a lot longer than two
> months. Why did you never submit this for a Boost review? You were
> thinking about it ~10 years ago, but you never did....
>> To be honest I can't say I find your library to be much of an
>> improvement, at least in terms of usability, since the programming
>> interface seems more constrained (why don't things work with arbitrary
>> ranges rather than these "text" containers?)
> They do, of course. I'm not sure why it is you think otherwise.
>> and verbose (just look at
>> the code to do transcoding with iterators),
> Are you referring to the verbosity of:
> char const * some_utf8 = /* ... */ ;
> out = std::ranges::copy(boost::text::as_utf32(some_utf8), out);
> , or:
> out = boost::text::transcode_utf_8_to_32(utf8_first, utf8_last, out);
> , or something else?
>> the set of features is
>> quite small,
> That is quite intentional. I want to standardize *basic* Unicode
> support. I feel that what I have in Boost.Text is the basic set that users
> will need, just to support languages or formatting conventions that are not
> common in their favorite environment. For instance, today there is no
> standard way of taking UTF-8 and turning it into UTF-16, or vice versa;
> this library is intended to work at that level. That is, it is intended to
> fill in needless gaps in Unicode support that exist in C++ -- gaps that no
> other major language besides C has. It is specifically not intended to
> replace all ICU functionality. Do you have specific things in mind that
> you think ~90% of Unicode-aware C++ users will need? Note that I did not
> say 100%.
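The gap described above (no standard UTF-8 to UTF-16 conversion) can be illustrated with a minimal hand-rolled sketch of one direction. This is illustration only, assuming well-formed input with no error handling; the library's actual transcoding facilities are the thing being proposed here, not this code.

```cpp
#include <cstddef>
#include <cstdint>
#include <string>

// Minimal UTF-8 -> UTF-16 transcoder for well-formed input.
// Invalid UTF-8 is NOT detected; a real implementation must handle it.
std::u16string utf8_to_utf16(const std::string & in) {
    std::u16string out;
    for (std::size_t i = 0; i < in.size();) {
        unsigned char b = static_cast<unsigned char>(in[i]);
        char32_t cp;
        int len;
        if (b < 0x80)      { cp = b;        len = 1; }  // ASCII
        else if (b < 0xE0) { cp = b & 0x1F; len = 2; }
        else if (b < 0xF0) { cp = b & 0x0F; len = 3; }
        else               { cp = b & 0x07; len = 4; }
        for (int j = 1; j < len; ++j)  // accumulate continuation bytes
            cp = (cp << 6) | (static_cast<unsigned char>(in[i + j]) & 0x3F);
        i += len;
        if (cp <= 0xFFFF) {
            out.push_back(static_cast<char16_t>(cp));
        } else {  // outside the BMP: encode as a surrogate pair
            cp -= 0x10000;
            out.push_back(static_cast<char16_t>(0xD800 + (cp >> 10)));
            out.push_back(static_cast<char16_t>(0xDC00 + (cp & 0x3FF)));
        }
    }
    return out;
}
```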
>> and that the database itself is not even accessible,
> That's also intentional. Another goal of the library is to make Unicode
> as simple as possible for naive users who just want to do the basics. If I
> find a request for a new feature that has a compelling use case, I'll add it.
>> last I remember your implementation was ridiculously bloated in size.
> I don't consider 1.5MB for a database containing all human languages in
> widespread use on computers to be a ridiculous size, but YMMV.
>> It also doesn't provide the ability to do fast substring search, which
>> you'd typically do by searching for a substring at the character
>> encoding level and then eliminating matches that do not fall on a
>> satisfying boundary, instead suggesting to do the search at the
>> grapheme level which is much slower, and the facility to test for
>> boundary isn't provided anyway.
> I honestly don't know what you mean here. If you use the text::text or
> text::string types, those are just contiguous sequences of bits, like a
> std::vector or std::string. text::text exposes iterators to those bits
> which can be used to get grapheme, code point, and/or UTF-8 byte views of
> the underlying data. If you are using something else besides the text::text
> or text::string types, you presumably have access to your own bits in your
> own representation. What prevents you from doing whatever substring search
> you like, via std::search(), std::ranges::includes(), or something else?
> Boost.Text is not intended as a string algorithms library.
>> I'm pretty sure I made similar comments in the past, but I don't feel
>> like any of them has been addressed.
> I think you're referring to this email you sent in the Boost.Text
> interest thread from 14 months ago:
> The Unicode library I did as a SoC project in 2009 was significantly
> smaller than that and if I recall correctly it has more data than the one
> in your library.
> Clearly some work can be done here to better optimize the database size.
> I did make it a bit smaller. The other comments are new.
> On Fri, Nov 1, 2019 at 3:41 PM Mathias Gaunard <
> [hidden email]> wrote:
>> To search for the utf-8 substring "foo" in the utf-8 string "I really
>> like foo dogs", there is no need to iterate the string per code point
>> or per grapheme as you do in your examples. You can just perform the
>> search at the code unit level, then check that the position before and
>> after the match does not lie inside a grapheme cluster, i.e. they are
>> on a valid boundary.
>> What you need to be able to do that is a function that tells you
>> whether an arbitrary position in your sequence of utf-8 code units
>> lies at a grapheme cluster boundary or not (which would probably be a
>> composition of two separate functions: one that tests whether the code
>> unit is on a code point boundary, and one that tests whether the code
>> point is on a grapheme cluster boundary). This functionality is not provided.
>> This sort of thing is briefly touched upon in Unicode TR#29 6.4.
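The suggested scheme can be sketched as follows. The code-point-boundary half is a pure bit test on UTF-8 (a position is a boundary iff it is at either end of the string or the byte there is not a continuation byte); the grapheme-cluster half additionally needs the Unicode database, so it is omitted here. This is an illustration of the suggestion, not Boost.Text code.

```cpp
#include <cstddef>
#include <string>

// In UTF-8, continuation bytes match 10xxxxxx; any other byte starts a
// code point. The grapheme-cluster boundary check (not shown) would
// consult the Unicode database on top of this.
bool at_code_point_boundary(const std::string & s, std::size_t i) {
    if (i == 0 || i == s.size())
        return true;
    return (static_cast<unsigned char>(s[i]) & 0xC0) != 0x80;
}

// Byte-level substring search, rejecting matches whose ends fall inside
// a code point.
std::size_t find_on_boundary(const std::string & hay, const std::string & needle) {
    for (std::size_t pos = hay.find(needle); pos != std::string::npos;
         pos = hay.find(needle, pos + 1)) {
        if (at_code_point_boundary(hay, pos) &&
            at_code_point_boundary(hay, pos + needle.size()))
            return pos;
    }
    return std::string::npos;
}
```

The point of the approach is that the search itself runs at full byte-search speed, with the (cheap) boundary test applied only to candidate matches.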
> I see. This seems like it might be really useful to add. I'll open a
> ticket for it on Github.
After writing this, I realized this is supported by calling
prev_grapheme_break(first, it, last) == it. There is an exception to this,
though, when it == last. I should either remove that exception (which
sounds like the right answer regardless of the rest), or provide
at_grapheme_break(first, it, last) (probably a good thing to do regardless
of the rest).