[safe_numerics] Formal review starts today

Hi Everyone,
Today, on March 2nd (or tomorrow, depending on your timezone), is the start
of the formal review of the safe_numerics library by Robert Ramey. The review
will run until March 11th. I will be serving as review manager.

safe_numerics is a set of drop-in replacements for the built-in integer
types, which make the guarantee that the results of operations are either
correct according to the rules of integer arithmetic or report a
diagnosable error (rather than offering modulo arithmetic results, or
resulting in undefined behavior, or returning incorrect results as in the
case of signed vs. unsigned comparisons).
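
As a quick illustration of what "diagnosable error" means in practice - a
minimal sketch assuming the header path and the default throw-on-error
policy described in the online docs:

#include <boost/safe_numerics/safe_integer.hpp> // include path assumed; see the docs
#include <iostream>
#include <limits>
#include <stdexcept>

int main() {
    boost::numeric::safe<int> x = std::numeric_limits<int>::max();
    try {
        boost::numeric::safe<int> y = x + 1; // plain int would silently wrap around
        std::cout << y << '\n';
    } catch (const std::exception & e) {
        std::cout << "error detected: " << e.what() << '\n';
    }
    return 0;
}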

The library can be found on GitHub:
https://github.com/robertramey/safe_numerics/

Online docs:
http://htmlpreview.github.io/?https://github.com/robertramey/safe_numerics/master/doc/html/index.html

Formal Review comments can be posted publicly on the mailing list or Boost
Library Incubator <http://blincubator.com>, or sent privately to the review
manager, that is myself.

Here are some questions you might want to answer in your review:

   - What is your evaluation of the design?
   - What is your evaluation of the implementation?
   - What is your evaluation of the documentation?
   - What is your evaluation of the potential usefulness of the library?
   - Did you try to use the library? With what compiler? Did you have any
   problems?
   - How much effort did you put into your evaluation? A glance? A quick
   reading? In-depth study?
   - Are you knowledgeable about the problem domain?

And most importantly:

Do you think the library should be accepted as a Boost library?

For more information about Boost Formal Review Process, see:
http://www.boost.org/community/reviews.html

Your review is needed!

Regards,
Andrzej Krzemieński, review manager.

Re: [safe_numerics] Formal review starts today

On 3/1/17 3:36 PM, Andrzej Krzemienski via Boost wrote:

Andrzej - why don't we post this on reddit/r/cpp as well?

Robert Ramey

> Andrzej Krzemieński, review manager.

Re: [safe_numerics] Formal review starts today

Ok, here it goes:
https://www.reddit.com/r/cpp/comments/5x1z67/boostsafe_numerics_formal_review_starts_today/


2017-03-02 1:48 GMT+01:00 Robert Ramey via Boost <[hidden email]>:

> On 3/1/17 3:36 PM, Andrzej Krzemienski via Boost wrote:
>
> Andrzej - why don't we post this on reddit/r/cpp as well?
>
> Robert Ramey
>
> Andrzej Krzemieński, review manager.

Re: [safe_numerics] Review Part 1 (Documentation)

AMDG

All comments are as of 3bcfabe1cafb8ef6edf08d3bd8730a1555b1c45d

General notes:

You should have a root index.html that redirects
to the top level documentation page.

There are two copies of proposal.pdf in the
repository.

safe_numerics.pdf seems to be out-dated.

VS2015: FAIL (as expected)

directory structure is not quite boost ready.
(needs include/boost/numeric/ instead of just include/)

The include paths used in the documentation are a mess.
I see at least 6 different variations:
- boost/safe_numerics/include/
- boost/safe_numerics/
- safe/numeric/
- boost/numeric/
- safe_numerics/include/
- ../include
- ../../include

In boostbook, <code language="c++"> enables C++
syntax highlighting for code snippets.  (This is
enabled by default for programlisting, but <code>
is used for non-C++ content enough that it needs
to be specified.)

Introduction:

"Since the problems and their solution are similar, We'll..."
"We" should not be capitalized.

"...the results of these operations are guaranteed to be
either arithmetically correct or invoke an error"
"either ... or ..." here has a noun on one side and
a verb on the other.  Try "... either to be ... or to invoke ..."

"#include <boost/safe_numeric/safe_integer.hpp>"
I believe that this should be "safe_numerics"
(with an "s")

"Enforce of other program requirements using ranged integer types."
Delete "of"

"The library includes the types safe_range(Min, Max) and safe_literal(N)"
These don't look like any C++ type I've ever seen unless
they're macros.

tutorial/1.html

"When this occurs, Most"
lowercase "Most"

"When this occurs, Most C++ compilers will continue to execute with no
indication that the results are wrong"
Is it the compiler that continues to execute or is it the
program built by the compiler?

"        // converts variables to int before evaluating he expression!"
s/he/the/

"The solution is to replace instances of char type with safe<char> type"
There are no char's in this example.

tutorial/2.html:

"This is a common problem with for loops"
Is this really true?  The most common pattern
of integer for loops is for(i=0;i<n;++i)
which can only overflow when n is a wider
type than i.  I usually see this with int/size_t,
but that will get a signed/unsigned comparison
warning on some compilers.
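
For reference, a self-contained illustration of that pattern (hypothetical
code, not taken from the tutorial):

#include <iostream>
#include <vector>

// The counter is a (signed, narrower) int while v.size() is std::size_t.
// The comparison converts i to an unsigned type (hence -Wsign-compare on
// some compilers), and if the container could ever hold more than INT_MAX
// elements, ++i would eventually overflow the int counter.
void zero_fill(std::vector<char> & v) {
    for (int i = 0; i < v.size(); ++i) {
        v[i] = 0;
    }
}

int main() {
    std::vector<char> v(10, 'x');
    zero_fill(v);
    std::cout << int(v[0]) << '\n'; // prints 0
    return 0;
}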

tutorial/5.html:

Creating the bounds type is also somewhat error-prone.
It would be quite easy for C++ programmers who are used
to half-open ranges to use safe_unsigned_range<0, i_array.size()>

tutorial/7.html
        std::cout << x / y;
        std::cout << " error detected!" << std::endl;
Should this be "NOT detected"?

eliminate_runtime_penalty.html:

The name trap_exception doesn't convey the meaning
of causing a compile-time error to me.  To me the term
'trap' implies a signal handler.

eliminate_runtime_penalty/1.html:

The links for native, automatic, and trap_exception are broken.


eliminate_runtime_penalty/3.html
    throw_exception // these variables need to
need to what?

integer.html:

"std::numeric_limits<T>.is_integer"
should be "::is_integer"

safe_numeric_concept.html:

"However, it would be very surprising if any implementation were to be
more complex that O(0);"
I think you mean O(1).  O(0) means that it takes no time at all.

"... all C++ operations permitted on it's base type..."
s/it's/its/

"( throw exception, compile time trap, etc..)"
Delete the leading space.

promotion_policy.html:

"auto sz = sx + y; // promotes expression type to a safe<long int> which
requires no result checking
is guaranteed not to overflow."
The line wrapping ends up uncommented.

What happens for binary operators when the arguments
have different promotion policies?  Is it an error?
Does the implementation randomly pick one?  Is there
some kind of precedence rule for promotion policies?
(Note: the same thing applies to exception policies)

exception_policy.html:

"EP | A type that full fills the requirements of an ExceptionPollicy"
s/full fills/fulfills/, s/ExceptionPollicy/ExceptionPolicy/

"EP::underflow_error(const char * message)"
Underflow is a floating point error that happens
when the value is too close to 0.  (INT_MIN - 1)
is an overflow error, not an underflow error.

"template<void (*F)(const char *), void (*G)(const char *), void
(*H)(const char *)>
boost::numeric::no_exception_support"
What do the arguments mean?  According to the table,
there are 6 functions, not 3.

safe.html:

"T | Underlying type from which a safe type is being derived"
"derived" has a specific meaning in C++ which is not what you mean.

"...at runtime if any of several events result in an incorrect
arithmetic type."
It isn't the type that's incorrect, is it?

"If one wants to switch between save and built-in types ..."
s/save/safe/

"this type of code will have to be fixed so that implicit
conversions to built-in types do not occur"
So are explicit casts allowed?

"This is because there is no way to guarantee that the expression i * i"
The second 'i' seems to be outside the <code> block.

"This can be justified by the observe"
This sentence is incomplete.

"This can be done by specifying a non-default type promotion policy
automatic"
Should be "policy, automatic."  Also, automatic should probably be <code>

"All the work is done at compile time - checking for exceptions
necessary (input is of course an exception)"
I can't parse this sentence.  Also, the '-' should
be an em-dash, I think.

safe_range.html:

"MIN | must be non-integer literal"
I think this is supposed to be "non-negative integer"
Also, this is only true for safe_unsigned_range.
It would be weird if safe_signed_range didn't
accept negative arguments.

#include <safe/numeric/safe_range.hpp>
The directory safe/numeric/ doesn't exist.

    safe_unsigned_range<7, 24> i
missing ';'

static_assert(std::is_same<decltype(k), safe<unsigned int>);
Syntax error: expected '>' before ')'

Also, safe is defined in safe_integer.hpp which is not #included.

promotion_policies/native.html:
"It's main function is to trap incorrect..."
s/It's/Its/

promotion_policies/cpp.html

"...we can force the evaluation of C++ expressions test environment..."
"...expressions /in the/ test..."

Unless I'm misunderstanding something, the use
of trap_exception will fail to compile at:
d *= velocity;


"Note the usage of the compile time trap policy in order to
detect at compile time any possible error conditions. As I
write this, this is still being refined. Hopefully this will
be available by the time you read this."
This comment seems out-dated.

exception_policies.html:

"no_exception_support<O, U = O, R =O, D = O>"
So now we have 4 parameters?

exception_policies/trap_exception.html:

"This exception policy will trap at compile time any
operation COULD result in a runtime exception"
There's a missing word here.  "...compile time /if/ any..."
is one possible fix.

exception_policies/no_exception_support.html

Template Parameters table: waaaaaay too many O's.
Also, the first two parameters (no exception and
uninitialized) are missing.

library_implementation.html:

"correct bugs in it, or understand it's limitations"
s/it's/its/

interval.html:

boost/numeric/interval.hpp and boost::numeric::interval
are already taken by Boost.Interval.

checked_result.html:

"checked_result(e, msg)"
'e' was not defined.

"static_cast<const char *>(c) | R | extract wrapped value - asserts if
not possible"
I think this is supposed to return the message as 'const char *'
not the value as 'R'.

exception_type.html:

"dispatch(overflow_error, "operation resulted in overflow");"
The template parameter EP is missing.

rationale.html:

"2. Can safe types be used as drop-in replacement for built-in types?"
replacements (plural)

"#include  <cstdint>
typedef safe_t<std::int16_t> int16_t;"
This may or may not compile as it is unspecified whether
cstdint defines ::int16_t.

In Christ,
Steven Watanabe


Re: [safe_numerics] Review Part 1 (Documentation)

On 3/2/17 12:12 PM, Steven Watanabe via Boost wrote:
> AMDG
>
> All comments are as of 3bcfabe1cafb8ef6edf08d3bd8730a1555b1c45d

Very, very helpful.  I'm amazed that I overlooked this stuff myself
and that you pick up on it so easily.

The observations that I have not commented on I consider "no brainers"
and I will just fix as recommended.

My inclination is to not fix these things right now.  I don't want to
present reviewers with a moving target as I think it can confuse
things a lot.  I'm thinking that when I get most of the input
I'm going to get, I'll make one final version which incorporates
all the non-controversial changes.

Thanks again for your invaluable observations.

Robert Ramey

>
> General notes:
>
> You should have a root index.html that redirects
> to the top level documentation page.
>
> There are two copies of proposal.pdf in the
> repository.
>
> safe_numerics.pdf seems to be out-dated.

Hmmm- I wasn't aware that these are in the repository - I'll remove them
>
> VS2015: FAIL (as expected)
>
> directory structure is not quite boost ready.
> (needs include/boost/numeric/ instead of just include/)

I know, but this is not a requirement for review.

>
> The include paths used in the documentation are a mess.
> I see at least 6 different variations:
> - boost/safe_numerics/include/
> - boost/safe_numerics/
> - safe/numeric/
> - boost/numeric/
> - safe_numerics/include/
> - ../include
> - ../../include

Hmmm - where? In the documentation? In the examples? Or elsewhere?

I've been disinclined to tightly couple this to Boost unless it's accepted.

>
> In boostbook, <code language="c++"> enables C++
> syntax highlighting for code snippets.  (This is
> enabled by default for programlisting, but <code>
> is used for non-C++ content enough that it needs
> to be specified.)
Hmmm - right now I'm using programlisting and it looks good - syntax is
colorized.  For inline code I'm using <code ...> without the language.  Are
you suggesting that I change the inline form to <code language="c++" ...> ?

> Introduction:
>
> "Since the problems and their solution are similar, We'll..."
> "We" should not be capitalized.
>
> "...the results of these operations are guaranteed to be
> either arithmetically correct or invoke an error"
> "either ... or ..." here has a noun on one side and
> a verb on the other.  Try "... either to be ... or to invoke ..."
>
> "#include <boost/safe_numeric/safe_integer.hpp>"
> I believe that this should be "safe_numerics"
> (with an "s")
>
> "Enforce of other program requirements using ranged integer types."
> Delete "of"
>
> "The library includes the types safe_range(Min, Max) and safe_literal(N)"
> These don't look like any C++ type I've ever seen unless
> they're macros.

Right - this was a sore point.  I can't make them types.  I think I've
experimented with making them macros, but I'm not entirely satisfied with
either.  It's a point which needs to be refined.

>
> tutorial/1.html
>
> "When this occurs, Most"
> lowercase "Most"
>
> "When this occurs, Most C++ compilers will continue to execute with no
> indication that the results are wrong"
> Is it the compiler that continues to execute or is it the
> program built by the compiler?

I meant the compiled program.  But now I'm coming upon cases where the
compiler traps in an invalid constexpr expression.

>
> "        // converts variables to int before evaluating he expression!"
> s/he/the/
>
> "The solution is to replace instances of char type with safe<char> type"
> There are no char's in this example.
>
> tutorial/2.html:
>
> "This is a common problem with for loops"

> Is this really true?
LOL - I can't prove it's a common problem with for loops.
 > The most common pattern
> of integer for loops is for(i=0;i<n;++i)
> which can only overflow when n is a wider
> type than i.  I usually see this with int/size_t,
right
> but that will get a signed/unsigned comparison
> warning on some compilers.
I've been surprised at how often compilers don't flag this.  Maybe I
have to include some more switches.

>
> tutorial/5.html:
>
> Creating the bounds type is also somewhat error-prone.
> It would be quite easy for C++ programmers who are used
> to half-open ranges to use safe_unsigned_range<0, i_array.size()>
>
> tutorial/7.html
>         std::cout << x / y;
>         std::cout << " error detected!" << std::endl;
> Should this be "NOT detected"?
>
> eliminate_runtime_penalty.html:
>
> The name trap_exception doesn't convey the meaning
> of causing a compile-time error to me.  To me the term
> 'trap' implies a signal handler.

Ahh names - Boost's favorite subject. I think I use "trap"
to indicate compile time detections: static_assert, errors detected
in constexpr expressions at compile time, etc.  I wanted to distinguish
these from runtime exceptions, assert (which I don't think I use) and
other runtime phenomena.  Signals haven't come up.  I still think
this is a good convention - at least in this context.

>
> eliminate_runtime_penalty/1.html:
>
> The links for native, automatic, and trap_exception are broken.
>
>
> eliminate_runtime_penalty/3.html
>     throw_exception // these variables need to
> need to what?
>
> integer.html:
>
> "std::numeric_limits<T>.is_integer"
> should be "::is_integer"
>
> safe_numeric_concept.html:
>
> "However, it would be very surprising if any implementation were to be
> more complex that O(0);"
> I think you mean O(1).  O(0) means that it takes no time at all.
Hmm - to me O(0) is fixed time while O(1) is time proportional to x^1.
But I'll rephrase the O() notation which is confusing in this context.

>
> "... all C++ operations permitted on it's base type..."
> s/it's/its/
>
> "( throw exception, compile time trap, etc..)"
> Delete the leading space.
>
> promotion_policy.html:
>
> "auto sz = sx + y; // promotes expression type to a safe<long int> which
> requires no result checking
> is guaranteed not to overflow."
> The line wrapping ends up uncommented.
>
> What happens for binary operators when the arguments
> have different promotion policies?  Is it an error?
> Does the implementation randomly pick one?  Is there
> some kind of precedence rule for promotion policies?
> (Note: the same thing applies to exception policies)

In both cases:
        at least one policy must be non void
        if both are non void they must be equal

I'll clarify this.
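
For what it's worth, a small sketch (not the library's actual code) of how
that rule could be expressed as a compile-time check - the names here are
hypothetical:

#include <type_traits>

// Combine two policies P1 and P2: at least one must be non-void, and if
// both are non-void they must be the same type.
template<class P1, class P2>
struct common_policy {
    static_assert(! (std::is_void<P1>::value && std::is_void<P2>::value),
        "at least one policy must be non-void");
    static_assert(std::is_void<P1>::value || std::is_void<P2>::value
        || std::is_same<P1, P2>::value,
        "if both policies are non-void they must be equal");
    using type = typename std::conditional<
        std::is_void<P1>::value, P2, P1
    >::type;
};

int main() {
    static_assert(std::is_same<common_policy<void, int>::type, int>::value,
        "the non-void policy is selected");
    return 0;
}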

>
> exception_policy.html:
>
> "EP | A type that full fills the requirements of an ExceptionPollicy"
> s/full fills/fulfills/, s/ExceptionPollicy/ExceptionPolicy/
>
> "EP::underflow_error(const char * message)"
> Underflow is a floating point error that happens
> when the value is too close to 0.  (INT_MIN - 1)
> is an overflow error, not an underflow error.

Right.  There is no underflow error thrown in the current code.  When I
made this I had the hope that safe_float would be supported by this
library, and I still have that hope for someday.  So I specified it here.

There are other questions besides.  Suppose one has a type "rational
number" represented by a pair of integers.  I would hope that a
"rational number" would fulfill the type requirements of this library
and might be usable as a T for safe<T>.  But underflow would be
applicable here.  Of course this opens up all kinds of issues which
I don't think we want to explore here.

But for these reasons, I left it in even though the current
implementation has no reason to use it.

> "template<void (*F)(const char *), void (*G)(const char *), void
> (*H)(const char *)>
> boost::numeric::no_exception_support"
> What do the arguments mean?  According to the table,
> there are 6 functions, not 3.

Right - will fix.  The code in exception_policies has the correct
number of arguments.
>
> safe.html:
>
> "T | Underlying type from which a safe type is being derived"
> "derived" has a specific meaning in C++ which is not what you mean.
hmm - OK - I can change that to "Underlying type on which a safe type is
based".  This will work better with the code, which uses safe_base for
some trait or other.

>
> "...at runtime if any of several events result in an incorrect
> arithmetic type."
> It isn't the type that's incorrect, is it?
>
> "If one wants to switch between save and built-in types ..."
> s/save/safe/
>
> "this type of code will have to be fixed so that implicit
> conversions to built-in types do not occur"

> So are explicit casts allowed?
They are.  The casting operator is overloaded to guarantee that
either the value is preserved or an error is invoked.

safe<int> i = 1024;
static_cast<char>(i); // will throw an exception at runtime

constexpr safe<int> ci = 1024;
constexpr char j = static_cast<char>(ci); // will trap at compile time

Also, implicit casts between safe types are permitted - but they
are checked for errors.

The above is inspired by the goal of achieving drop-in
replaceability.

>
> "This is because there is no way to guarantee that the expression i * i"
> The second 'i' seems to be outside the <code> block.
>
> "This can be justified by the observe"
> This sentence is incomplete.
>
> "This can be done by specifying a non-default type promotion policy
> automatic"
> Should be "policy, automatic."  Also, automatic should probably be <code>
>
> "All the work is done at compile time - checking for exceptions
> necessary (input is of course an exception)"
> I can't parse this sentence.  Also, the '-' should
> be an em-dash, I think.
>
> safe_range.html:
>
> "MIN | must be non-integer literal"
> I think this is supposed to be "non-negative integer"
> Also, this is only true for safe_unsigned_range.
> It would be weird if safe_signed_range didn't
> accept negative arguments.
>
> #include <safe/numeric/safe_range.hpp>
> The directory safe/numeric/ doesn't exist.
>
>     safe_unsigned_range<7, 24> i
> missing ';'
>
> static_assert(std::is_same<decltype(k), safe<unsigned int>);
> Syntax error: expected '>' before ')'
>
> Also, safe is defined in safe_integer.hpp which is not #included.

This is included by safe_range.hpp.  But I agree that if I use
safe<..> I should include safe_integer.hpp even though it's
redundant.

>
> promotion_policies/native.html:
> "It's main function is to trap incorrect..."
> s/It's/Its/
>
> promotion_policies/cpp.html
>
> "...we can force the evaluation of C++ expressions test environment..."
> "...expressions /in the/ test..."
>
> Unless I'm misunderstanding something, the use
> of trap_exception will fail to compile at:
> d *= velocity;

Hmmm - why do you think that?

This is an example taken from a real project (for the tiny PIC
processor).  I didn't make it clear, but LEMPARAMETER is a typedef
for a 16-bit integer.  So this should never trap...

Damn - I see you're correct here.  Now if I had written

d = velocity * velocity;

I would be golden!

Nope - then there is the possibility that d can't fit into a uint16,
so it could still be wrong.  I guess this illustrates the impossibility
for normal people to actually write demonstrably correct code.

I'm thinking the example should look more like:

// return value in steps
/*
Use the formula:
     stopping dist = v **2 / a / 2
*/
uint16 get_stopping_distance(LEMPARAMETER velocity){
     return velocity * velocity / lem.acceleration / 2;
}

That should be provably correct!!!

Damn - the correct return of uint16 can't be guaranteed at compile time
unless we do:

constexpr uint16 get_stopping_distance(LEMPARAMETER velocity){
     return velocity * velocity / lem.acceleration / 2;
}

and only call with constexpr argument.


Ahhh - finally I see your point.  Assignment using d as
an accumulator loses the range calculated at compile time,
so subsequent operations can't be guaranteed not to
overflow.

I'll expand discussion of this example.

> "Note the usage of the compile time trap policy in order to
> detect at compile time any possible error conditions. As I
> write this, this is still being refined. Hopefully this will
> be available by the time you read this."
> This comment seems out-dated.
>
> exception_policies.html:
>
> "no_exception_support<O, U = O, R =O, D = O>"
> So now we have 4 parameters?
>
> exception_policies/trap_exception.html:
>
> "This exception policy will trap at compile time any
> operation COULD result in a runtime exception"
> There's a missing word here.  "...compile time /if/ any..."
> is one possible fix.
>
> exception_policies/no_exception_support.html
>
> Template Parameters table: waaaaaay too many O's.
> Also, the first two parameters (no exception and
> uninitialized) are missing.
>
> library_implementation.html:
>
> "correct bugs in it, or understand it's limitations"
> s/it's/its/
>
> interval.html:
>
> boost/numeric/interval.hpp and boost::numeric::interval
> are already taken by Boost.Interval.

Right

I looked at the boost.interval library to see what I might
be able to use.  Looks to me like an underrated gem, but it
only addresses floating point and doesn't (of course) support
constexpr.  I don't know if it would have any role in some
future safe_float.

>
> checked_result.html:
>
> "checked_result(e, msg)"
> 'e' was not defined.
>
> "static_cast<const char *>(c) | R | extract wrapped value - asserts if
> not possible"
> I think this is supposed to return the message as 'const char *'
> not the value as 'R'.

I see this is out of whack.

checked_result<R> is a union (or monad if you must) which contains
either the result of type R or an instance of "exception_type", described
in the next section.  Will address.

>
> exception_type.html:
>
> "dispatch(overflow_error, "operation resulted in overflow");"
> The template parameter EP is missing.
>
> rationale.html:
>
> "2. Can safe types be used as drop-in replacement for built-in types?"
> replacements (plural)
>
> "#include  <cstdint>
> typedef safe_t<std::int16_t> int16_t;"
> This may or may not compile as it is unspecified whether
> cstdint defines ::int16_t.
>
> In Christ,
> Steven Watanabe
>

Re: [safe_numerics] Review Part 1 (Documentation)

AMDG

On 03/02/2017 03:47 PM, Robert Ramey via Boost wrote:
> On 3/2/17 12:12 PM, Steven Watanabe via Boost wrote:
>> <snip>
>>
>> The include paths used in the documentation are a mess.
>> I see at least 6 different variations:
>> - <snip>
>
> Hmmm - where - in the documentation? In the examples? or?.
>

  This was just from reading the html documentation.
The code taken from example/ consistently uses ../include/
(probably because it was actually tested and
this variation is one that works).
this variation is one that works).
The reference sections under Type Requirements and
Types have the most variations such as:

Header
#include <safe/numeric/safe_literal.hpp>

Header
#include <boost/safe_numerics/safe_integer.hpp>


>>
>> In boostbook, <code language="c++"> enables C++
>> syntax highlighting for code snippets.  (This is
>> enabled by default for programlisting, but <code>
>> is used for non-C++ content enough that it needs
>> to be specified.)
> Hmmm - right now I'm using program listing and it looks good - syntax is
> colorized.  For inline I'm using <code ...> without the language.  Are
> you suggesting that I change the inline to <code language="c++" ... ?
>

  Yes.  This is just a minor stylistic detail.
The only real difference is that it will change
the color of int in inline code.

> <snip>
>
>> "However, it would be very surprising if any implementation were to be
>> more complex that O(0);"
>> I think you mean O(1).  O(0) means that it takes no time at all.
> Hmm - to me O(0) is fixed time while O(1) is time proportional to x^1.
> But I'll rephrase the O() notation which is confusing in this context.
>

The formal definition of O() is
f(x) \in O(g(x)) iff. \exists C, x_0: \forall x > x_0: |f(x)| <= |Cg(x)|

If g(x) = 0, then this reduces to
\exists x_0 \forall x > x_0: f(x) = 0
i.e. the running time is zero.

If g(x) = 1, then we have
\exists x_0 \forall x > x_0: |f(x)| <= |C|
meaning that the running time is less than
some constant, regardless of the value of x.
...which brings us to the real problem with
using big-O here: what is x?  For sequence
algorithms, x is the length of the sequence,
but there's nothing like that here.

>> <snip>
>> What happens for binary operators when the arguments
>> have different promotion policies?  Is it an error?
>> Does the implementation randomly pick one?  Is there
>> some kind of precedence rule for promotion policies?
>> (Note: the same thing applies to exception policies)
>
> In both cases:
>     at least one policy must be non void
>     if both are non void they must be equal
>
> I'll clarify this.
>

Err, what is a void policy?

>> <snip>
>> "EP::underflow_error(const char * message)"
>> Underflow is a floating point error that happens
>> when the value is too close to 0.  (INT_MIN - 1)
>> is an overflow error, not an underflow error.
>
> Right.  There is no underflow error thrown in the current code.

I found one use (probably accidental) at checked.hpp:410.

> <snip>
>
>> Also, safe is defined in safe_integer.hpp which is not #included.
>
> this is included by safe_range.hpp.  But I agree that if I use
> safe<..> I should include safe_integer.hpp even though it's
> redundent.

safe_range.hpp only includes safe_base.hpp.
The only reason I noticed this was that
Intellisense highlighted safe as undefined.

>
>> <snip>
>> promotion_policies/cpp.html
>>
>> Unless I'm misunderstanding something, the use
>> of trap_exception will fail to compile at:
>> d *= velocity;
>
> <snip>
>
> ....  I guess this illustrates the impossibility
> for normal people to actually write demonstrably correct code.
>

Tell me about it:
https://github.com/boostorg/random/blob/develop/include/boost/random/uniform_int_distribution.hpp#L93

> <snip>
>
> Ahhh - finally I see your point.  assignment using d as
> an accumuator loses the range calculated at compile time
> so subsequent operations can't be guaranteed to not
> overflow.
>

Yeah.  This makes it a bit inconvenient to use
trap_exception with anything other than a strict
functional style.

In Christ,
Steven Watanabe


Re: [safe_numerics] Review Part 1 (Documentation)

On 3/2/17 4:49 PM, Steven Watanabe via Boost wrote:
> AMDG

>> ....  I guess this illustrates the impossibility
>> for normal people to actually write demonstrably correct code.
... without the help of something like this.

> Tell me about it:
> https://github.com/boostorg/random/blob/develop/include/boost/random/uniform_int_distribution.hpp#L93

I took a look at this, and it looks good to me.

>
>> Ahhh - finally I see your point.  assignment using d as
>> an accumuator loses the range calculated at compile time
>> so subsequent operations can't be guaranteed to not
>> overflow.
>>
>
> Yeah.  This makes it a bit inconvenient to use
> trap_exception with anything other than a strict
> functional style.

Actually this turns out to be quite an interesting point.

I started an example of taking a small microcomputer program
for controlling a stepper motor and using the compile-time
trapping facility to tweak the program so that it would be guaranteed
not to produce an invalid result without invoking any runtime overhead.

Things went pretty well until I had to actually update the "to" position.
At this point I had to do more than "tweak" the program or give up
on my goal of avoiding runtime overhead to guarantee no incorrect
behavior.  At that point I suspended work on the example because
it failed to illustrate my hope that I could take a legacy program
for a foreign processor and make minimal changes to guarantee
correctness w/o runtime overhead.  But the experiment was very
interesting and useful, and I hope to get back to it when I
understand the science of computer programming better.

Robert Ramey

>
> In Christ,
> Steven Watanabe
>

Re: [safe_numerics] Review Part 2 (Implementation)

AMDG

All comments are as of 3bcfabe1cafb8ef6edf08d3bd8730a1555b1c45d

automatic.hpp

15:
// policy which creates results types equal to that of C++ promotions.
// Using the policy will permit the program to build and run in release
// mode which is identical to that in debug mode except for the fact
// that errors aren't trapped.
Copy/paste from native?

83:
        // clause 2 - otherwise if the rank of the unsigned type exceeds
        // the rank of the of the maximum signed type
How is this possible?  The rank of std::intmax_t
is the maximum possible rank.  If the unsigned type
is larger, then rank will return void which gives
a compile error when accessing ::value.
Actually it looks like calculate_max_t was originally
written to return T or U, but was then altered to
use [u]intmax_t.

97:
        // clause 4 - otherwise use unsigned version of the signed type
            std::uintmax_t
It isn't clear to me why uintmax_t is preferable
to intmax_t when neither is able to represent every
possible result.

124:
        using t_base_type = typename base_type<T>::type;
This is called from safe_base_operations:259, which
has already called base_type.

  From the documentation, I had assumed that automatic
used the range from safe_base.  However this range
is not actually passed to the PromotionPolicy, so
automatic always bumps the type to the next largest
integer type, until it reaches [u]intmax_t.  This
means that any expression with more than a few
terms will always be carried out using [u]intmax_t,
which is potentially inefficient, e.g. intmax_t may
require software implementation if it's larger
than the hardware register size.  A greater
problem arises from the fact that mixed signed/unsigned
arithmetic eventually chooses unsigned when the
range is too large.  If we have a slightly complex
expression with both signed and unsigned terms, the
result type will eventually be selected as uintmax_t,
which may be incapable of representing the result,
thus causing an error.  (I know that this scenario
can also fail with builtin integer types, but
it seems to me that the whole point of 'automatic'
is to make things "just work" without having to
worry about pesky things like signedness)

247:
        using result_base_type = typename boost::mpl::if_c<
            std::numeric_limits<t_base_type>::is_signed
            || std::numeric_limits<u_base_type>::is_signed,
            std::intmax_t,
            std::uintmax_t
        >::type;
Why division doesn't use calculate_max_t like multiplication
deserves some explanation.  Division can overflow
too (numeric_limits<uintmax_t>::max() / 1).

275:
    static constexpr divide(
This function appears to be necessary, but is
not documented as part of the PromotionPolicy
concept.  The same goes for modulus.

328:
    template<typename T, typename U>
    struct left_shift_result
Inconsistent with the documentation of PromotionPolicy
which says: PP::left_shift_result<T>::type.  Same
for right_shift_result, and bitwise_result.

362:
        using type = typename boost::mpl::if_c<
            (sizeof(t_base_type) > sizeof(u_base_type)),
            t_base_type,
            u_base_type
        >::type;
Is it actually correct to combine &, |, and ^
in the same class?  The result of & will always
fit in the smaller type, but | and ^ require
the larger type.

checked.hpp:

90:
            // INT32-C Ensure that operations on signed
            // integers do not overflow
This is the unsigned case.

220:
  template<class R, class T, class U>
  constexpr checked_result<R> add(
Consider the following (assuming 2's complement):
checked::add<int>(INT_MIN, UINT_MAX)
The result should be INT_MAX, but your implementation
will generate an error when converting UINT_MAX
to int.
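
A worked version of the arithmetic (assuming 32-bit int and 2's complement),
done in a wider type purely for illustration - this is not the library's
implementation:

#include <climits>
#include <cstdint>
#include <iostream>

int main() {
    // INT_MIN = -2147483648, UINT_MAX = 4294967295; the exact sum is
    // 2147483647 == INT_MAX, so a checked add could return it rather
    // than reporting an error.
    std::int64_t exact = std::int64_t(INT_MIN) + std::int64_t(UINT_MAX);
    std::cout << exact << " == " << INT_MAX << '\n';
    return 0;
}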

227:
    if(! rt.no_exception() )
The double negative makes this confusing to read.

230:
    return detail::add<R>(t, u);
This relies on implicit conversion and may generate
spurious warnings.

301:
Everything I said about add also applies to subtract.

411:
                    exception_type::underflow_error,
This should be overflow_error.

467:
Again as with add.  This time the problematic case is
multiply<unsigned>(-1, -1)

539:
    return detail::divide<R>(tx.m_r, ux.m_r);
Just because m_r is public, doesn't mean that
you should access it directly.

583:
  constexpr divide_automatic(
The difference between divide and divide_automatic
could use some explanation.  It looks like the only
difference is that divide_automatic doesn't cast
the divisor, but it isn't clear why.  Also, why
do you need two checked divide functions in the
first place?  There should really just be one
that always works correctly.

622:
    return cast<R>(abs(t) % abs(u));
This likely differs from the builtin % and should be documented
as such.  Also I prefer % to have the property that
t / u * u + t % u = t, which this implementation
does not satisfy.
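
A small worked example of that identity (not library code): with builtin
truncating division, -7 / 3 == -2 and -7 % 3 == -1, so the identity holds;
an abs-based remainder would give +1 instead and break it.

#include <cstdlib>
#include <iostream>

int main() {
    int t = -7, u = 3;
    std::cout << (t / u) * u + (t % u) << '\n';     // -7: identity holds
    std::cout << std::abs(t) % std::abs(u) << '\n'; // 1: differs from t % u, which is -1
    return 0;
}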

639:
    // INT34-C C++ standard paragraph 5.8
    if(u > std::numeric_limits<T>::digits){
You're off-by-one
"The behavior is undefined if the right operand
is negative, or greater than *or equal to* the length in
bits of the promoted left operand" (emphasis added)
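
A minimal sketch (not the library's code) of a check that respects that
wording, written against the bit width rather than numeric_limits::digits:

#include <climits>
#include <iostream>
#include <stdexcept>

// Undefined behavior if the shift count is negative or >= the width (in
// bits) of the promoted left operand, so the test must use >=, not >.
unsigned checked_left_shift(unsigned t, int u) {
    constexpr int width = sizeof(unsigned) * CHAR_BIT;
    if (u < 0 || u >= width)
        throw std::domain_error("shift count out of range");
    return t << u;
}

int main() {
    try {
        std::cout << checked_left_shift(1u, 32) << '\n';
    } catch (const std::exception & e) {
        std::cout << "error: " << e.what() << '\n'; // rejected: 32 >= 32
    }
    return 0;
}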

left_shift and right_shift (625-792) have several problems:
- 640: You're checking the width of T even though the
       actual shift is done after casting to R.
- 696, 708: These overloads serve no purpose.  The
       only difference between them is checks that
       are already handled by check_shift.
- 785: This test duplicates work done by check_shift

checked_result.hpp:

36:    // can't select constructor based on the current status of another
    // checked_result object. So no copy constructor
Saying that there is no copy constructor is misleading,
as the default cctor exists and is used.

67:
        //assert(no_exception());
Why is this commented out?  reading a
not-currently-active member of a union
is undefined behavior.

99:
    constexpr boost::logic::tribool
    operator<=(const checked_result<T> & t) const {
        return ! operator>(t) && ! operator<(t);
    }
This is operator==.  The correct way is just ! operator>(t).

cpp.hpp:

54:
    using rank =
Rank is only used for comparisons.  I don't see
any reason to use it instead of using sizeof directly.
This applies to automatic::rank as well.
(Note: rank matters for builtin integer promotion,
because types with the same size can have different
ranks.  Your implementation of rank, however, is
strictly based on size, making it essentially useless.)

exception.hpp:

15:
// contains operations for doing checked aritmetic on NATIVE
// C++ types.
Wrong file.  Also s/aritmetic/arithmetic/ for
wherever this was pasted from.

exception_policies.hpp:

1:
#ifndef BOOST_NUMERIC_POLICIES_HPP
#define BOOST_NUMERIC_POLICIES_HPP
Make this BOOST_NUMERIC_EXCEPTION_POLICIES_HPP?

interval.hpp:

18:
#include <cstdlib> // quick_exit
I don't see quick_exit anywhere in this file.

87:
// account for the fact that for floats and doubles
There's also long double, you know.

101:
namespace {
Please don't use the unnamed namespace in headers.

133:

template<typename R>
constexpr bool less_than(
    const checked_result<R> & lhs,
    const checked_result<R> & rhs
Why is this here and not in checked_result.hpp?
Also this can be implemented easily using operator<:
  return lhs < rhs;
(The implicit conversion from tribool to bool
does exactly what you want here.)

257:
            checked::modulus<R>(t.l, u.l),
            checked::modulus<R>(t.l, u.u),
            checked::modulus<R>(t.u, u.l),
            checked::modulus<R>(t.u, u.u)
This appears to be copied from divide, but it's totally
wrong.  modulus is not monotonic, so you can't get
away with checking only the boundaries.
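
A worked counterexample (not library code): over t in [3,5] and u in [3,5]
the four corner combinations give remainders {0, 3, 2, 0}, suggesting an
upper bound of 3, yet the interior point 4 % 5 == 4 exceeds it.

#include <iostream>

int main() {
    for (int t = 3; t <= 5; ++t)
        for (int u = 3; u <= 5; ++u)
            std::cout << t << " % " << u << " = " << t % u << '\n';
    return 0;
}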

339:
    const R rl = safe_compare::greater_than(t.l, u.l) ? t.l : u.l;
The implicit cast to R may not be safe.
(same at 340, 356, and 357)

359:
    if(safe_compare::greater_than(rl, ru)){
This check isn't necessary.  The union of two non-empty
intervals can't be empty.

453:
template<>
std::ostream & operator<<(std::ostream & os, const
boost::numeric::interval<unsigned char> & i)
- This specialization is not a template, and can
  only appear in one translation unit.
- The implementation requires <ostream>, but you only #include <iosfwd>

native.hpp:

27:
    // Standard C++ type promotion for expressions doesn't depend
    // on the operation being performed so we can just as well
    // use any operation to determine it.  We choose + for this
    // purpose.
Comment out-dated.  The code (somewhat) uses decltype on the
correct operators.

safe_base.hpp:

240:
    safe_base operator++(){      // pre increment
pre-increment and pre-decrement should return by reference.
post-increment and post-decrement should return by value.
You currently have everything returning by value
except post-decrement (which returns a dangling
reference)
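
For reference, a toy illustration (not the library's code) of the
conventional signatures:

#include <iostream>

struct counter {
    int n = 0;
    counter & operator++() { ++n; return *this; }  // pre-increment: reference to *this
    counter operator++(int) {                      // post-increment: old value, by value
        counter old = *this;
        ++n;
        return old;
    }
};

int main() {
    counter c;
    std::cout << (c++).n << '\n'; // 0 - the old value
    std::cout << (++c).n << '\n'; // 2 - the updated object itself
    return 0;
}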

258:
    constexpr safe_base operator-() const { // unary minus
should unary minus allow unsigned to become signed? (subject to
the PromotionPolicy, of course).

261:
    constexpr safe_base operator~() const {
This will fail if Min/Max are anything other than
the full range of the Stored type.

safe_base_operations.hpp:

82:
        indeterminate(t_interval < this_interval),
This works, but it would be a bit more intuitive
if interval had an 'intersects' function.

244:
// Note: the following global operators will be only found via
// argument dependent lookup.  So they won't conflict any
// other global operators for types in namespaces other than
// boost::numeric
Unfortunately, ADL casts a very wide net.
What makes these operators (mostly) safe
is the use of enable_if.

267:
    // Use base_value(T) ( which equals MIN ) to create a new interval. Same
    // for MAX.  Now
Now ... what?

288:
    constexpr static const interval<result_base_type> type_interval =
        exception_possible() ?
            interval<result_base_type>{}
This can overestimate the result interval if the
interval only overflows on one side.

313:
    using type_helper = typename boost::mpl::if_c<
        std::numeric_limits<result_base_type>::is_integer,
        safe_type,
        unsafe_type
    >::type;
This is what?  Support for floating point in the future?

531:
        // when we add the temporary intervals above, we'll get a new interval
        // with the correct range for the sum !
I think you mean multiply and product, not add and sum.

676:
        constexpr static const interval<result_base_type> type_interval =
            exception_possible() ?
At this point, you lose any benefit of divide_nz over divide.
You should check r_interval.exception() instead of exception_possible().
The same goes for modulus.

748:
    // argument dependent lookup should guarentee that we only get here
I don't understand this comment.

867:
            static_cast<result_base_type>(base_value(t))
            % static_cast<result_base_type>(base_value(u))
Okay, we have a problem here: checked::modulus does /not/
have the same behavior as the builtin % when the divisor
is negative, which means that treating them as interchangeable
here will result in weird and inconsistent behavior (not to
mention silently producing values outside the expected interval.)

907:
typename boost::lazy_enable_if_c<
    ...,
    boost::mpl::identity<bool>
enable_if_c with bool should work just as well.

932:
        // if the ranges don't overlap
        (! boost::logic::indeterminate(r)) ?
            // values in those ranges can't be equal
            false
This is operator<, not operator==.  You should return r, not false here.
Same at 970 in operator>.  Also, you're already implementing
operator<= and >= in terms of < and >.  Is there a reason
to implement > separately?  (t > u) == (u < t)

1221:
    // handle safe<T> << int, int << safe<U>, safe<T> << safe<U>
    // exclude std::ostream << ...
Copy/paste.  Should be >> and istream.

1276:

            base_value(std::numeric_limits<T>::max())
This doesn't take the input range into account and
can drastically overestimate the result range.

1401:
    using bwr = bitwise_or_result<T, U>;
I think it would be slightly better to create an alias
template<class T, class U>
using bitwise_xor_result = bitwise_or_result<T, U>;
instead of using bitwise_or_result directly in
the implementation of xor.  (Note that the range
of ^ can be larger than that of | if the lower bound
of the parameters is greater than 0.)

1432, 1474:
    class P, // promotion polic
s/polic/policy/

1435:
std::ostream & operator<<(
    std::ostream & os,
Should this use std::basic_ostream<CharT, Traits>?

safe_common.hpp:

safe_compare.hpp:

safe_integer.hpp:

safe_literal.hpp:

111:
    constexpr safe_literal_impl operator-() const { // unary minus
Err, what?  This will work iff. N is 0.

140:
#define safe_literal(n)                               \
This is not acceptable.  It totally violates
the naming rules for macros.

safe_range.hpp:

utility.hpp:

26:
    constexpr log(T x){
A function with a common name, that is not in a private
namespace, and can match almost anything is a really
bad idea.

concept/exception_policy.hpp:

The reason that the check is commented out deserves
explanation.  (Missing functions are acceptable and
will cause a compile-time error instead of checking
at runtime.)

concept/*.hpp: no comments

In Christ,
Steven Watanabe


Re: [safe_numerics] Review Part 2 (Implementation)

I've spent a lot of time going over this and have incorporated changes
to address most of the points.  There are still some things I'm working
on though.

I think the major problem is confusion about what the promotion policy
should do and what the requirements on the types it accepts should be.
Factoring out the promotion policy is necessary to keep things
sort of understandable and to keep safe operations from turning into
a huge ball of spaghetti.  In retrospect, I'm beginning to wonder
whether automatic type promotion was a great idea.  In any event, this
will take me a couple more days to sort through.

I've also incorporated almost all your observations of documentation.

I must say that I'm in awe of your abilities to read through and
understand 6000 lines of C++ template code, 1000 lines of examples, and
52 pages of documentation (when rendered as a pdf) in such a short
time.  I spent a long, long time developing this, and I dare say
I think you already understand it better than I do.  Clearly
we have different brain structure - I guess that's a good thing.

I have to say I love Boost.

Robert Ramey


On 3/6/17 7:35 AM, Steven Watanabe via Boost wrote:

> AMDG
>
> All comments are as of 3bcfabe1cafb8ef6edf08d3bd8730a1555b1c45d
>
> automatic.hpp
>
> 15:
> // policy which creates results types equal to that of C++ promotions.
> // Using the policy will permit the program to build and run in release
> // mode which is identical to that in debug mode except for the fact
> // that errors aren't trapped.
> Copy/paste from native?

Right
>
> 83:
>         // clause 2 - otherwise if the rank of the unsigned type exceeds
>         // the rank of the of the maximum signed type
> How is this possible?  The rank of std::intmax_t
> is the maximum possible rank.  If the unsigned type
> is larger, then rank will return void which gives
> a compile error when accessing ::value.

I changed this.

> Actually it looks like calculate_max_t was originally
> written to return T or U, but was then altered to
> use [u]intmax_t.

Right - My intention was that automatic would reflect
the standard C++ promotions but would return
[u]intmax_t.  This is later reduced according to
the anticipated range of the result.  So the whole
business is about trying to get the sign right.

I made the changes and reran the tests and nothing changed.

>
> 97:
>         // clause 4 - otherwise use unsigned version of the signed type
>             std::uintmax_t
> It isn't clear to me why uintmax_t is preferable
> to intmax_t when neither is able to represent every
> possible result.

If both operands are signed, then uintmax_t can hold
one more bit than intmax_t can.  In these cases the
result can be twice as large before runtime
checking is necessary.
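
A small illustration of the extra headroom (not library code):

#include <cstdint>
#include <iostream>

int main() {
    // The exact sum of two values equal to INTMAX_MAX does not fit in
    // intmax_t, but it does fit in uintmax_t (it equals UINTMAX_MAX - 1).
    std::uintmax_t a = INTMAX_MAX;
    std::uintmax_t b = INTMAX_MAX;
    std::cout << a + b << '\n';
    return 0;
}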

> 124:
>         using t_base_type = typename base_type<T>::type;
> This is called from safe_base_operations:259, which
> has already called base_type.
>
>   From the documentation, I had assumed that automatic
> used the range from safe_base.  However this range
> is not actually passed to the PromotionPolicy, so
> automatic always bumps the type to the next largest
> integer type, until it reaches [u]intmax_t.  This
> means that any expression with more than a few
> terms will always be carried out using [u]intmax_t,
> which is potentially inefficient, e.g. intmax_t may
> require software implementation if it's larger
> than the hardware register size.

Clearly this is a sticky point.  There are two issues
here.  One is the result base type and the other is the
min/max range of the safe result type returned.  The returned
safe type includes the proper range but may have
a base type which is larger than necessary.  I'll have
to look into this.

>  A greater
> problem arises from the fact that mixed signed/unsigned
> arithmetic eventually chooses unsigned when the
> range is too large.  If we have a slightly complex
> expression with both signed and unsigned terms, the
> result type will eventually be selected as uintmax_t,
> which may be incapable of representing the result,
> thus causing an error.

Hmmm - My thinking for "automatic" was that it should mimic
the "native" C/C++ promotion but return a larger size. I'll
have to think about this some more.

>  (I know that this scenario
> can also fail with builtin integer types, but
> it seems to me that the whole point of 'automatic'
> is to make things "just work" without having to
> worry about pesky things like signedness)

Right.  To the extent possible.

Looking at this observation, I see that native promotion
has the same problem in that it's unclear whether it
should take safe types or native base types as arguments.
More stuff to think about.

I'm sure that when I started out, I thought I had it
all clear in my mind.  But then as I worked through
my tests and examples, my "fixes" clouded the issue.

>
> 247:
>         using result_base_type = typename boost::mpl::if_c<
>             std::numeric_limits<t_base_type>::is_signed
>             || std::numeric_limits<u_base_type>::is_signed,
>             std::intmax_t,
>             std::uintmax_t
>         >::type;
> Why division doesn't use calculate_max_t like multiplication
> deserves some explanation.  Division can overflow
> too (numeric_limits<uintmax_t>::max() / 1).

Hmmm - I'm sure it's because I've presumed that the result
can always be contained in the same size type as the number
being divided.  That is, t / x <= t - and divide by zero is
checked separately.  Am I wrong about this?

>
> 275:
>     static constexpr divide(
> This function appears to be necessary, but is
> not documented as part of the PromotionPolicy
> concept.  The same goes for modulus.

Damn - I forgot about this.  It couples the operation
with the promotion policy.  I spent an incredible
amount of time trying to avoid doing this.  But in the
end I just figured out how to do it.  So I had to put
these in.

>
> 328:
>     template<typename T, typename U>
>     struct left_shift_result
> Inconsistent with the documentation of PromotionPolicy
> which says: PP::left_shift_result<T>::type.  Same
> for right_shift_result, and bitwise_result.

Right

>
> 362:
>         using type = typename boost::mpl::if_c<
>             (sizeof(t_base_type) > sizeof(u_base_type)),
>             t_base_type,
>             u_base_type
>         >::type;
> Is it actually correct to combine &, |, and ^
> in the same class?  The result of & will always
> fit in the smaller type, but | and ^ require
> the larger type.

Right - I'll fix this

>
> checked.hpp:
>
> 90:
>             // INT32-C Ensure that operations on signed
>             // integers do not overflow
> This is the unsigned case.
>
> 220:
>   template<class R, class T, class U>
>   constexpr checked_result<R> add(
> Consider the following (assuming 2's complement):
> checked::add<int>(INT_MIN, UINT_MAX)
> The result should be INT_MAX,

I'm not seeing this.  With 8 bit ints we've got
INT_MIN = x80 = -128
UINT_MAX = x80 = 128
adding together x100
truncating = 0

which might be OK on current machines, but it does violate the
standard in that it is undefined behavior.  Given that compiler optimizers
can do anything they want with UB, I don't think we can do this.

but your implementation
> will generate an error when converting UINT_MAX
> to int.

As I believe it should -

UINT_MAX = 0x80
convert to int - nothing changes
x80 interpreted as an int is -128

So this would transform an arithmetic value of 128 to
a value of -128.  Not a good thing.

What I've done is applied the standard. Convert each
type to the result type then do the addition.  On this
conversion we change the value - game over - invoke
error.

> 227:
>     if(! rt.no_exception() )
> The double negative makes this confusing to read.

Easy fix

>
> 230:
>     return detail::add<R>(t, u);
> This relies on implicit conversion and may generate
> spurious warnings.

Conversion - I don't see it.  detail::add<R>(t, u)
returns a checked_result<R> which is returned as another
checked_result<R>.  No conversion.  Probably not even
a copy, with copy elision.
>
> 301:
> Everything I said about add also applies to subtract.

OK - if we can agree on add we'll agree on subtract.
>
> 411:
>                     exception_type::underflow_error,
> This should be overflow_error.
OK
>
> 467:
> Again as with add.  This time the problematic case is
> multiply<unsigned>(-1, -1)

again - same response

>
> 539:
>     return detail::divide<R>(tx.m_r, ux.m_r);
> Just because m_r is public, doesn't mean that
> you should access it directly.

OK

>
> 583:
>   constexpr divide_automatic(
> The difference between divide and divide_automatic
> could use some explanation.  It looks like the only
> difference is that divide_automatic doesn't cast
> the divisor, but it isn't clear why.  Also, why
> do you need two checked divide functions in the
> first place?  There should really just be one
> that always works correctly.

LOL - that's what I thought. I couldn't make one which worked
for both native and automatic promotions.

The case which is giving me fits is dividing INT_MIN / -1

INT_MIN = 0x80 = -128
-128 / -1 = + 128 = 0x80 = INT_MIN again.

So we have INT_MIN / -1 -> INT_MIN an incorrect result.

Now I forget - but this is only a problem with automatic
promotions.
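
For reference, a minimal self-contained sketch (not the library's code) of
the extra check a correct int divide needs beyond divide-by-zero:

#include <climits>
#include <iostream>
#include <stdexcept>

// The single overflowing case for int division: the true result of
// INT_MIN / -1 is INT_MAX + 1, which is not representable.
int checked_divide(int t, int u) {
    if (u == 0)
        throw std::domain_error("divide by zero");
    if (t == INT_MIN && u == -1)
        throw std::overflow_error("INT_MIN / -1 overflows");
    return t / u;
}

int main() {
    try {
        std::cout << checked_divide(INT_MIN, -1) << '\n';
    } catch (const std::exception & e) {
        std::cout << "error: " << e.what() << '\n';
    }
    return 0;
}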

>
> 622:
>     return cast<R>(abs(t) % abs(u));
> This likely differs from the builtin % and should be documented
> as such.

Hmmm - slightly sticky issue here.  I used the wording from §5.6/4:

"If both operands are nonnegative then the remainder is nonnegative; if
not, the sign of the remainder is implementation-defined."

This wording from ISO 14882:2003(e) is no longer present in ISO 14882:2011(e).

But even so my implementation should have trapped on negative
operands.

I'll consider what to do about this.

> Also I prefer % to have the property that
> t / u * u + t % u = t, which this implementation
> does not satisfy.

I think this can be achieved

>
> 639:
>     // INT34-C C++ standard paragraph 5.8
>     if(u > std::numeric_limits<T>::digits){
> You're off-by-one
> "The behavior is undefined if the right operand
> is negative, or greater than *or equal to* the length in
> bits of the promoted left operand" (emphasis added)
>
> left_shift and right_shift (625-792) have several problems:
> - 640: You're checking the width of T even though the
>        actual shift is done after casting to R.
> - 696, 708: These overloads serve no purpose.  The
>        only difference between them is checks that
>        are already handled by check_shift.
> - 785: This test duplicates work done by check_shift

I've pretty much totally redone left and right shift last weekend.
I think they are more likely to be correct now.
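
For what it's worth, a minimal sketch of the bounds test the comments
above imply (checked against the promoted/result operand type and using
the "greater than or equal" wording of the standard); the name is made up
and the error-reporting machinery is left out:

#include <cstdint>
#include <limits>
#include <type_traits>

// the count must be non-negative and strictly less than the width in bits
// of the left operand; digits + is_signed reconstructs that width
template<class T, class U>
constexpr bool shift_count_ok(U u) {
    return u >= 0 &&
           u < std::numeric_limits<T>::digits + std::is_signed<T>::value;
}

static_assert( shift_count_ok<std::uint32_t>(31), "31 is a valid count");
static_assert(!shift_count_ok<std::uint32_t>(32), "32 is undefined behavior");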

>
> checked_result.hpp:
>
> 36:    // can't select constructor based on the current status of another
>     // checked_result object. So no copy constructor
> Saying that there is no copy constructor is misleading,
> as the default cctor exists and is used.
OK
>
> 67:
>         //assert(no_exception());
> Why is this commented out?  reading a
> not-currently-active member of a union
> is undefined behavior.

the assert conflicts with constexpr so I can't use it.
When one tries to cast to an R at compile time - it's trapped
just fine by my clang compiler on my mac.

The second case seems to pass because I never retrieve the
error message at compile time.

>
> 99:
>     constexpr boost::logic::tribool
>     operator<=(const checked_result<T> & t) const {
>         return ! operator>(t) && ! operator<(t);
>     }
> This is operator==. The correct way is just ! operator>(t).
OK

>
> cpp.hpp:
>
> 54:
>     using rank =
> Rank is only used for comparisons.  I don't see
> any reason to use it instead of using sizeof directly.
> This applies to automatic::rank as well.
> (Note: rank matters for builtin integer promotion,
> because types with the same size can have different
> ranks.  Your implementation of rank, however, is
> strictly based on size, making it essentially useless.)

Then I should probably change the implementation so
that types of different rank but the same size (e.g.
int and long on some machines) are handled correctly.
I don't think this is too hard to fix.
>
> exception.hpp:
>
> 15:
> // contains operations for doing checked aritmetic on NATIVE
> // C++ types.
> Wrong file.  Also s/aritmetic/arithmetic/ for
> wherever this was pasted from.
OK

>
> exception_policies.hpp:
>
> 1:
> #ifndef BOOST_NUMERIC_POLICIES_HPP
> #define BOOST_NUMERIC_POLICIES_HPP
> Make this BOOST_NUMERIC_EXCEPTION_POLICIES_HPP?
>
> interval.hpp:
>
> 18:
> #include <cstdlib> // quick_exit
> I don't see quick_exit anywhere in this file.
>
> 87:
> // account for the fact that for floats and doubles
> There's also long double, you know.
OK
>
> 101:
> namespace {
> Please don't use the unnamed namespace in headers.

OK - but I forgot why it's not recommended
>
> 133:
>
> template<typename R>
> constexpr bool less_than(
>     const checked_result<R> & lhs,
>     const checked_result<R> & rhs
> Why is this here and not in checked_result.hpp?

Removed

> Also this can be implemented easily using operator<:
>   return lhs < rhs;
> (The implicit conversion from tribool to bool
> does exactly what you want here.)

OK - I'll take your word for it.  I used lhs < rhs in a lambda

>
> 257:
>             checked::modulus<R>(t.l, u.l),
>             checked::modulus<R>(t.l, u.u),
>             checked::modulus<R>(t.u, u.l),
>             checked::modulus<R>(t.u, u.u)
> This appears to be copied from divide, but it's totally
> wrong.  modulus is not monotonic, so you can't get
> away with checking only the boundaries.
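
A concrete counterexample (illustrative only): with t in [4,6] and u fixed
at 5, the corner results are 4 % 5 == 4 and 6 % 5 == 1, yet the interior
value gives 5 % 5 == 0, which lies outside [1,4].

#include <cassert>

int main() {
    assert(4 % 5 == 4);   // corner
    assert(6 % 5 == 1);   // corner
    assert(5 % 5 == 0);   // interior value is not bracketed by the corners
    return 0;
}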
>
> 339:
>     const R rl = safe_compare::greater_than(t.l, u.l) ? t.l : u.l;
> The implicit cast to R may not be safe.
> (same at 340, 356, and 357)

>
> 359:
>     if(safe_compare::greater_than(rl, ru)){
> This check isn't necessary.  The union of two non-empty
> intervals can't be empty.
OK
>
> 453:
> template<>
> std::ostream & operator<<(std::ostream & os, const
> boost::numeric::interval<unsigned char> & i)
> - This specialization is not a template, and can
>   only appear in one translation unit.
Please expand upon this

> - The implementation requires <ostream>, but you only #include <iosfwd>
Right - but any other code which invokes this function will already have
included <ostream>.  That's why I included <iosfwd> rather than <ostream>,
since this operator is used almost exclusively for debugging/examples.

>
> native.hpp:
>
> 27:
>     // Standard C++ type promotion for expressions doesn't depend
>     // on the operation being performed so we can just as well
>     // use any operation to determine it.  We choose + for this
>     // purpose.
> Comment out-dated.  The code (somewhat) uses decltype on the
> correct operators.
>
> safe_base.hpp:
>
> 240:
>     safe_base operator++(){      // pre increment
> pre-increment and pre-decrement should return by reference.
> post-increment and post-decrement should return by value.
> You currently have everything returning by value
> except post-decrement (which returns a dangling
> reference)
Right
>
> 258:
>     constexpr safe_base operator-() const { // unary minus
> should unary minus allow unsigned to become signed? (subject to
> the PromotionPolicy, of course).
LOL - I struggled with this - switched a couple of times back
and forth. I decided to leave the type the same.  That probably
conflicts with automatic promotion.  This still needs thinking
about.

>
> 261:
>     constexpr safe_base operator~() const {
> This will fail if Min/Max are anything other than
> the full range of the Stored type.
Right - but that's a common case.

>
> safe_base_operations.hpp:
>
> 82:
>         indeterminate(t_interval < this_interval),
> This works, but it would be a bit more intuitive
> if interval had an 'intersects' function.
Hmmm - it has intersection.  Would the following be better?

     static_assert(
         intersection<Stored>(t_interval, this_interval).exception(),
         "safe type cannot be constructed with this type"
     );

>
> 244:
> // Note: the following global operators will be only found via
> // argument dependent lookup.  So they won't conflict any
> // other global operators for types in namespaces other than
> // boost::numeric
> Unfortunately, ADL casts a very wide net.
> What makes these operators (mostly) safe
> is the use of enable_if.
>
> 267:
>     // Use base_value(T) ( which equals MIN ) to create a new interval. Same
>     // for MAX.  Now
> Now ... what?
>
> 288:
>     constexpr static const interval<result_base_type> type_interval =
>         exception_possible() ?
>             interval<result_base_type>{}
> This can overestimate the result interval if the
> interval only overflows on one side.

LOL - I never thought about that refinement.  I'll have to think about
this. I'm thinking that this might well apply to most of the operations.

>
> 313:
>     using type_helper = typename boost::mpl::if_c<
>         std::numeric_limits<result_base_type>::is_integer,
>         safe_type,
>         unsafe_type
>     >::type;
> This is what?  Support for floating point in the future?

I think I put this in because of the possibility of
safe<int> x = 10;
float y = 10.0;
auto z = x + y;

so z would be a float

Of course now I don't actually remember.  As I write this I don't feel
confident that I actually know what the above will do. Though I'm sure
I had an idea at one time.  I'll take another look.

>
> 531:
>         // when we add the temporary intervals above, we'll get a new
> interval
>         // with the correct range for the sum !
> I think you mean multiply and product, not add and sum.
>
of course
> 676:
>         constexpr static const interval<result_base_type> type_interval =
>             exception_possible() ?
> At this point, you lose any benefit of divide_nz over divide.
> You should check r_interval.exception() instead of exception_possible().
> The same goes for modulus.

Hmmm - still looking at this.

>
> 748:
>     // argument dependent lookup should guarentee that we only get here
> I don't understand this comment.
LOL - that makes two of us.

>
> 867:
>             static_cast<result_base_type>(base_value(t))
>
>             % static_cast<result_base_type>(base_value(u))
> Okay, we have a problem here: checked::modulus does /not/
> have the same behavior as the builtin % when the divisor
> is negative, which means that treating them as interchangeable
> here will result in weird and inconsistent behavior (not to
> mention silently producing values outside the expected interval.)

I think fixing checked_modulus will address this

>
> 907:
> typename boost::lazy_enable_if_c<
>     ...,
>     boost::mpl::identity<bool>
> enable_if_c with bool should work just as well.
>
> 932:
>         // if the ranges don't overlap
>         (! boost::logic::indeterminate(r)) ?
>             // values in those ranges can't be equal
>             false
> This is operator<, not operator==.  You should return r, not false here.
> Same at 970 in operator>.  Also, you're already implementing
> operator<= and >= in terms of < and >.  Is there a reason
> to implement > separately?  (t > u) == (u < t)
>
> 1221:
>     // handle safe<T> << int, int << safe<U>, safe<T> << safe<U>
>     // exclude std::ostream << ...
> Copy/paste.  Should be >> and istream.
>
> 1276:
>
>             base_value(std::numeric_limits<T>::max())
> This doesn't take the input range into account and
> can drastically overestimate the result range.

I think I've fixed this.

>
> 1401:
>     using bwr = bitwise_or_result<T, U>;
> I think it would be slightly better to create an alias
> template<class T, class U>
> using bitwise_xor_result = bitwise_or_result<T, U>;
> instead of using bitwise_or_result directly in
> the implementation of xor.  (Note that the range
> of ^ can be larger than that of | if the lower bound
> of the parameters is greater than 0.)
>
> 1432, 1474:
>     class P, // promotion polic
> s/polic/policy/
>
> 1435:
> std::ostream & operator<<(
>     std::ostream & os,
> Should this use std::basic_ostream<CharT, Traits>?
I think so too.

>
> safe_common.hpp:
>
> safe_compare.hpp:
>
> safe_integer.hpp:
>
> safe_literal.hpp:
>
> 111:
>     constexpr safe_literal_impl operator-() const { // unary minus
> Err, what?  This will work iff. N is 0.
that's out
>
> 140:
> #define safe_literal(n)                               \
> This is not acceptable.  It totally violates
> the naming rules for macros.

I'm still struggling with this.  What do you suggest?

>
> safe_range.hpp:
>
> utility.hpp:
>
> 26:
>     constexpr log(T x){
> A function with a common name, that is not in a private
> namespace, and can match almost anything is a really
> bad idea.
Hmmm I could change this to something like bit_count
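
For illustration, a minimal constexpr sketch of what such a renamed helper
might look like (the name bit_count is tentative and the semantics here are
just for the example): the number of bits needed to represent a
non-negative value.

template<typename T>
constexpr unsigned bit_count(T x) {
    return x == 0 ? 0u : 1u + bit_count(x >> 1);
}

static_assert(bit_count(255u) == 8u, "0xFF needs 8 bits");
static_assert(bit_count(256u) == 9u, "0x100 needs 9 bits");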

>
> concept/exception_policy.hpp:
>
> The reason that the check is commented out deserves
> explanation.  (Missing functions are acceptable and
> will cause a compile-time error instead of checking
> at runtime.)
>
> concept/*.hpp: no comments
>
> In Christ,
> Steven Watanabe
>
>
> _______________________________________________
> Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
>


_______________________________________________
Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
Reply | Threaded
Open this post in threaded view
|

Re: [safe_numerics] Review Part 2 (Implementation)

Boost - Dev mailing list
2017-03-07 18:32 GMT+01:00 Robert Ramey via Boost <[hidden email]>:

247:

>>         using result_base_type = typename boost::mpl::if_c<
>>             std::numeric_limits<t_base_type>::is_signed
>>             || std::numeric_limits<u_base_type>::is_signed,
>>             std::intmax_t,
>>             std::uintmax_t
>>         >::type;
>> Why division doesn't use calculate_max_t like multiplication
>> deserves some explanation.  Division can overflow
>> too (numeric_limits<uintmax_t>::max() / 1).
>>
>
> Hmmm - I'm sure it's because I've presumed that the result
> can always be contained in the same size type as the number
> being divided.  That is, t / x <= t - and divide by zero is
> checked separately. Am I wrong about this?
>

I am not sure if this is what Steve meant, but:

  int x = INT_MIN;
  int y = -1;
  int a = x / y; // overflows!

safe<int> does a check for this condition, but I find it quite surprising
that it throws a domain_error rather than overflow_error. How do you make a
call whether it is an overflow or a domain error? For mathematical ints
dividing -2147483648 by -1 is well defined.

Regards,
&rzej;

_______________________________________________
Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
Reply | Threaded
Open this post in threaded view
|

Re: [safe_numerics] Review Part 2 (Implementation)

Boost - Dev mailing list
In reply to this post by Boost - Dev mailing list
AMDG

On 03/07/2017 10:32 AM, Robert Ramey via Boost wrote:

>
> On 3/6/17 7:35 AM, Steven Watanabe via Boost wrote:
>>
>> automatic.hpp
>>
>> 97:
>>         // clause 4 - otherwise use unsigned version of the signed type
>>             std::uintmax_t
>> It isn't clear to me why uintmax_t is preferable
>> to intmax_t when neither is able to represent every
>> possible result.
>
> If both operands are signed - then uintmax_t could hold
> one more bit than intmax_t can.  In these cases the
> operand result can be twice as large before runtime
> checking is necessary.
>

Right, but uintmax_t can't hold
negative values which /also/ require
runtime checking.

>
>>
>> 247:
>>         using result_base_type = typename boost::mpl::if_c<
>>             std::numeric_limits<t_base_type>::is_signed
>>             || std::numeric_limits<u_base_type>::is_signed,
>>             std::intmax_t,
>>             std::uintmax_t
>>         >::type;
>> Why division doesn't use calculate_max_t like multiplication
>> deserves some explanation.  Division can overflow
>> too (numeric_limits<uintmax_t>::max() / 1).
>
> Hmmm - I'm sure it's because I've presumed that the result
> can always be contained in the same size type as the number
> being divided.  That is, t / x <= t - and divide by zero is
> checked separately. Am I wrong about this?
>

Yes.  There are two specific cases that can
cause problems:
- the example I gave above returns intmax_t which cannot
  represent the maximum value of uintmax_t.
- std::numeric_limits<intmax_t>::min() / -1, which is
  also out of range.
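
Stated concretely (illustrative only) - the first quotient does not fit in
the intmax_t that the quoted code would select as the result type:

#include <cstdint>
#include <limits>

// UINTMAX_MAX / 1 exceeds the largest value an intmax_t can hold
static_assert(
    std::numeric_limits<std::uintmax_t>::max() / 1 >
        static_cast<std::uintmax_t>(std::numeric_limits<std::intmax_t>::max()),
    "quotient cannot be represented as intmax_t");
// INTMAX_MIN / -1 == INTMAX_MAX + 1 is likewise unrepresentable
// (evaluating it directly is undefined behavior, so it is only noted here)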

>
>>
>> checked.hpp:
>>
>> 220:
>>   template<class R, class T, class U>
>>   constexpr checked_result<R> add(
>> Consider the following (assuming 2's complement):
>> checked::add<int>(INT_MIN, UINT_MAX)
>> The result should be INT_MAX,
>
> I'm not seeing this.  With 8 bit ints we've got
> INT_MIN = x80 = -128
> UINT_MAX = x80 = 128
> adding together x100
> truncating = 0
>

UINT_MAX (not INT_MAX) = 0xFF = 255

> which might be OK except on current machines but does violate the
> standard in that it is undefined behavior.  Given that compiler optimizers
> can do anything they want on UB, I don't think we can do this
>
>> but your implementation
>> will generate an error when converting UINT_MAX
>> to int.
>
> As I believe it should -
>
> UINT_MAX = 0x80
> convert to int - nothing changes
> x80 interpreted as an int is -128
>
> So this would transform an arithmetic value of 128 to
> a value of -128.  Not a good thing.
>
> What I've done is applied the standard. Convert each
> type to the result type then do the addition.  On this
> conversion we change the value - game over - invoke
> error.
>

My mental model for the correct behavior of checked::add is:
- add the values as if using infinite precision arithmetic
- truncate the result to result_type
- if the value doesn't fit, then return an error

  I notice that safe_compare does extra work
to allow comparisons to give the correct
mathematical result even if casts would
fail.  I believe the checked::xxx should
do the same.  In addition, *not* following
this rule is precisely the reason for the
problems that you've encountered with divide.
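
For illustration, a sketch of that model specialized to R = int, T = int,
U = unsigned (the case discussed above); this is not the library's code,
just one way to compute the mathematically correct sum and then test
whether it fits:

#include <cassert>
#include <climits>

bool checked_add(int t, unsigned u, int & result) {
    if (t >= 0) {
        // both operands non-negative: overflow iff u > INT_MAX - t
        if (u > static_cast<unsigned>(INT_MAX - t)) return false;
        result = t + static_cast<int>(u);
        return true;
    }
    // t < 0: the mathematical sum is u - |t|
    unsigned mag = static_cast<unsigned>(-(t + 1)) + 1u;  // |t| without overflow
    if (u >= mag) {                                       // sum is non-negative
        unsigned s = u - mag;
        if (s > static_cast<unsigned>(INT_MAX)) return false;
        result = static_cast<int>(s);
        return true;
    }
    // sum is negative but no smaller than t, so it always fits
    result = t + static_cast<int>(u);
    return true;
}

int main() {
    int r;
    assert(checked_add(INT_MIN, UINT_MAX, r) && r == INT_MAX);  // not an error
    assert(!checked_add(1, UINT_MAX, r));                       // genuine overflow
    return 0;
}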

>>
>> 230:
>>     return detail::add<R>(t, u);
>> This relies on implicit conversion and may generate
>> spurious warnings.
>
> conversion - I don't see it.  detail::add<R>(t, u)
> returns a checked_result<R> which is returned as another
> checked_result<R>.  No conversion.  Probably not even
> a copy with copy elision.

I meant implicit conversion of t and u.
(Is the PromotionPolicy forbidden from
returning a type smaller than the arguments?)

>>
>>
>> 583:
>>   constexpr divide_automatic(
>> The difference between divide and divide_automatic
>> could use some explanation.  It looks like the only
>> difference is that divide_automatic doesn't cast
>> the divisor, but it isn't clear why.  Also, why
>> do you need two checked divide functions in the
>> first place?  There should really just be one
>> that always works correctly.
>
> LOL - that's what I thought. I couldn't make one which worked
> for both native and automatic promotions.
>
> The case which is giving me fits is dividing INT_MIN / -1
>
> INT_MIN = 0x80 = -128
> -128 / -1 = + 128 = 0x80 = INT_MIN again.
>
> So we have INT_MIN / -1 -> INT_MIN an incorrect result.
>
> Now I forget - but this is only a problem with automatic
> promotions.
>

Try this way:
if (u == 0) error;
auto uresult = checked::abs(t)/checked::abs(u);
if (sign(t) != sign(u))
  return checked::negate<R>(uresult);
else
  return cast<R>(uresult);

This should work correctly in all cases
as long as you handle all the sticky cases
in checked::abs and a hypothetical checked::negate.
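
Fleshing that outline out a little (types and names chosen for the
example, not taken from the library): doing the division on magnitudes and
reapplying the sign catches INT_MIN / -1 as an overflow instead of letting
it wrap.

#include <cassert>
#include <climits>

bool checked_divide(int t, int u, int & result) {
    if (u == 0) return false;                                 // divide by zero
    unsigned mag_t = t < 0 ? static_cast<unsigned>(-(t + 1)) + 1u
                           : static_cast<unsigned>(t);
    unsigned mag_u = u < 0 ? static_cast<unsigned>(-(u + 1)) + 1u
                           : static_cast<unsigned>(u);
    unsigned q = mag_t / mag_u;                               // |quotient|
    if ((t < 0) != (u < 0)) {                                 // result is negative
        if (q > static_cast<unsigned>(INT_MAX) + 1u) return false;
        result = q == 0 ? 0 : -static_cast<int>(q - 1u) - 1;
        return true;
    }
    if (q > static_cast<unsigned>(INT_MAX)) return false;     // e.g. INT_MIN / -1
    result = static_cast<int>(q);
    return true;
}

int main() {
    int r;
    assert(!checked_divide(INT_MIN, -1, r));                  // overflow reported
    assert(checked_divide(INT_MIN, 1, r) && r == INT_MIN);
    assert(checked_divide(7, -2, r) && r == -3);              // truncates like /
    return 0;
}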

>>
>>
>> interval.hpp:
>>
>> 101:
>> namespace {
>> Please don't use the unnamed namespace in headers.
>
> OK - but I forgot why it's not recommended

  Code in the unnamed namespace will be distinct
in every translation unit.  Using the unnamed
namespace in a header means that every translation
unit that #includes it will get a copy of this
code, and the linker will not merge the duplicates.

>>
>> 453:
>> template<>
>> std::ostream & operator<<(std::ostream & os, const
>> boost::numeric::interval<unsigned char> & i)
>> - This specialization is not a template, and can
>>   only appear in one translation unit.
> Please expand upon this
>

If you #include this header from multiple
translation units, it will fail to link,
with a multiple definition error.
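
A standalone illustration of the point (the names here are invented for
the example, not taken from the library): an explicit specialization of a
function template is an ordinary function for ODR purposes, so a
definition that lives in a header has to be declared inline.

#include <ostream>

template<class T>
std::ostream & print(std::ostream & os, const T & t) { return os << t; }

// without 'inline', two translation units including this header would each
// define the same function and produce exactly the kind of duplicate-definition
// link error reported later in the review
template<>
inline std::ostream & print<unsigned char>(std::ostream & os, const unsigned char & c) {
    return os << static_cast<unsigned>(c);   // print as a number, not a character
}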

>> - The implementation requires <ostream>, but you only #include <iosfwd>
> Right - but any other code which invokes this function will already have
> included <ostream>.  That's why I included <iosfwd> rather than <ostream>,
> since this operator is used almost exclusively for debugging/examples.

This operator requires ostream to be
complete /at point of definition/, not when
it called.  i.e.
  #include "interval.hpp"
  #include <ostream>
will fail.  I'm assuming that interval.hpp
doesn't indirectly #include <ostream>.

>>
>> safe_literal.hpp:
>>
>> 111:
>>     constexpr safe_literal_impl operator-() const { // unary minus
>> Err, what?  This will work iff. N is 0.
> that's out
>>
>> 140:
>> #define safe_literal(n)                               \
>> This is not acceptable.  It totally violates
>> the naming rules for macros.
>
> I'm still struggling with this.  What do you suggest?

  I suggest leaving it out.  Any name that can
legitimately be used as a macro would be more
cumbersome than writing out safe_signed_literal
or safe_unsigned_literal.

In Christ,
Steven Watanabe


_______________________________________________
Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
Reply | Threaded
Open this post in threaded view
|

Re: [safe_numerics] Review Part 2 (Implementation)

Boost - Dev mailing list
In reply to this post by Boost - Dev mailing list
On 3/7/17 11:33 AM, Andrzej Krzemienski via Boost wrote:
> safe<int> does a check for this condition, but I find it quite surprising
> that it throws a domain_error rather than overflow_error. How do you make a
> call whether it is an overflow or a domain error? For mathematical ints
> dividing -2147483648 by -1 is well defined.

Truth is I didn't take a whole lot of care in deciding which one to use.
In this case, I think your right that overflow would be a better choice.



_______________________________________________
Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
Reply | Threaded
Open this post in threaded view
|

Re: [safe_numerics] Review Part 3 (Tests + Summary)

Boost - Dev mailing list
In reply to this post by Boost - Dev mailing list
AMDG

I vote to *ACCEPT* safe_numerics into Boost.

The following issues *must* be resolved before inclusion:
- Incorrect results for division (see below)
- Linker errors from multiple definition (see below)
- Name conflict for interval
- The whole mess with automatic.  (Probably either allow
  it to see the range of safe_base somehow or remove it.)
- Test cases must verify results, not just exception or not
- The precise rules for determining whether an operator
  throws must be specified.  (My preferred definition:
  An operator throws iff. it is mathematically undefined
  or the mathematical result cannot be represented in the
  result type.)

All comments are as of 3bcfabe1cafb8ef6edf08d3bd8730a1555b1c45d

The tests for the most part seem to check only
whether the operation succeeds or fails.  They
don't check whether the result is actually
correct or not.

test_z.cpp appears to be entirely #if 0'd
test0.cpp doesn't seem to do anything that
 isn't handled by the main tests.

test_divide.hpp:129:

                assert(result == unsafe_result);
unsafe_result is uninitialized.  (Most other uses are commented out)

test_values.hpp:
This is missing 0 which is kind of an important case.

I've attached my own test case for +-*/.
There's some behavior that I consider a
little odd which I noted on lines 64-72.  Also
it shows some incorrect results for division
with automatic and negative [signed] char.
for example:
generic_test.cpp(76): error: in "test_div": check base_value(f(t, u)) ==
expected has failed [1 != 0]
Failure occurred in a following context:
    -85 [safe_base<signed char,-128,127,automatic>] / 3464536379
[safe_base<unsigned int,0,4294967295,automatic>] ->
[safe_base<__int64,-9223372036854775808,9223372036854775807,automatic>]

I had to make a couple minor edits to get the library
to compile with vc15 (just released).  Patch attached.

When I include the entire library in two translation units
and link them together I get errors:

test1.obj : error LNK2005: "class std::basic_ostream<char,struct
std::char_traits<char> > & __cdecl std::operator<<<signed char>(class
std::basic_ostream<char,struct std::char_traits<char> > &,struct
boost::numeric::checked_result<signed char> const &)"
(??$?6C@std@@YAAAV?$basic_ostream@DU?$char_traits@D@std@@@0@AAV10@ABU?$checked_result@C@numeric@boost@@@Z)
already defined in include_all.obj
test1.obj : error LNK2005: "class std::basic_ostream<char,struct
std::char_traits<char> > & __cdecl std::operator<<<unsigned char>(class
std::basic_ostream<char,struct std::char_traits<char> > &,struct
boost::numeric::checked_result<unsigned char> const &)"
(??$?6E@std@@YAAAV?$basic_ostream@DU?$char_traits@D@std@@@0@AAV10@ABU?$checked_result@E@numeric@boost@@@Z)
already defined in include_all.obj
test1.obj : error LNK2005: "class std::basic_ostream<char,struct
std::char_traits<char> > & __cdecl std::operator<<<unsigned char>(class
std::basic_ostream<char,struct std::char_traits<char> > &,struct
boost::numeric::interval<unsigned char> const &)"
(??$?6E@std@@YAAAV?$basic_ostream@DU?$char_traits@D@std@@@0@AAV10@ABU?$interval@E@numeric@boost@@@Z)
already defined in include_all.obj
test1.obj : error LNK2005: "class std::basic_ostream<char,struct
std::char_traits<char> > & __cdecl std::operator<<<signed char>(class
std::basic_ostream<char,struct std::char_traits<char> > &,struct
boost::numeric::interval<signed char> const &)"
(??$?6C@std@@YAAAV?$basic_ostream@DU?$char_traits@D@std@@@0@AAV10@ABU?$interval@C@numeric@boost@@@Z)
already defined in include_all.obj

  I tried using boost::format(...) % safe<T>() and it
fails because your operator% matches.  I notice
that you have checks for ostream and istream with
the shift operators, but it would really be better
to make it more generic by checking is_arithmetic
(or perhaps std::numeric_limits::is_specialized)
on all the operators.
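
A rough standalone sketch of that kind of constraint (the type demo_safe
and everything else here are made up for illustration): only let the
operator templates participate when the other operand is a numeric type,
i.e. one for which std::numeric_limits is specialized, so an unrelated
type such as boost::format can never match operator%.

#include <limits>
#include <type_traits>

struct demo_safe { int value; };

template<class U,
    typename std::enable_if<std::numeric_limits<U>::is_specialized, int>::type = 0>
int operator%(const demo_safe & t, const U & u) {
    return t.value % static_cast<int>(u);
}
// demo_safe{10} % 3 compiles; an unrelated class type simply finds no overload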

In Christ,
Steven Watanabe



_______________________________________________
Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost

generic_test.cpp (8K) Download Attachment
safe_numerics_vc15.patch (1K) Download Attachment
Reply | Threaded
Open this post in threaded view
|

Re: [safe_numerics] Formal review starts today

Boost - Dev mailing list
In reply to this post by Boost - Dev mailing list


> -----Original Message-----
> From: Boost [mailto:[hidden email]] On Behalf Of Andrzej Krzemienski via Boost
> Sent: 01 March 2017 23:37
> To: [hidden email]; [hidden email]
> Cc: Andrzej Krzemienski
> Subject: [boost] [safe_numerics] Formal review starts today
>
> Hi Everyone,
> Today, on March 2nd (or tomorrow, depending on your timezone), is the start
> of the formal review of safe_numerics library by Robert Ramey. The review
> will run till March 11th. I will be serving as review manager.
>
> safe_numerics is a set of drop-in replacements for the built-in integer
> types, which make the guarantee that the results of operations are either
> correct according to the rules of integer arithmetic or report a
> diagnosable error (rather than offering modulo arithmetic results, or
> resulting in undefined behavior, or returning incorrect results as in the
> case of signed vs. unsigned comparisons).
>
> The library can be found on GitHub:
> https://github.com/robertramey/safe_numerics/
>
> Online docs:
> http://htmlpreview.github.io/?https://github.com/robertramey/safe_numerics/
> master/doc/html/index.html
>
> Formal Review comments can be posted publicly on the mailing list or Boost
> Library Incubator <http://blincubator.com>, or sent privately to the review
> manager, that is myself.
>
> Here are some questions you might want to answer in your review:
>
>    - What is your evaluation of the design?

Complicated to use (compared to int), and very, very complicated to write.

But that is a consequence of the daft design of the C language.

The language also doesn't allow use of hardware to handle overflow (and underflow) and doesn't have arrays etc as first class citizens.

I don't believe that it is really an idiot-proof drop-in replacement for the built-in integral types, but it will be fine to be used for new projects, especially as it needed the fancy features of C++14.

I agree that explicit conversions are the Right Thing, but they do carry a cost to the users - the need to understand the issues and take care with construction and assignment because this is a User Defined Type (UDT) and the special-case privileges of built-in do not apply (another daft design decision).  Floating-point and fixed-point UDT have proved unintuitive to use because of explicit conversions; there are big elephant traps awaiting the unwary.

Costly at compile time because of the number of libraries included, but that's a cost worth paying.

I like that the infrastructure might be extended to other than integral types.

>    - What is your evaluation of the implementation?

Above my pay grade.

>    - What is your evaluation of the documentation?

Good exposition of the problems and solutions.

Good examples.

Links to source of examples code would be *very* useful.  Starting with an example is the most common way of 'getting started'.

Will using "" file specification
#include "../include/safe_range.hpp"
instead of <>
#include <../include/safe_range.hpp>
cause trouble for users trying to use/modify the examples in Boost or their own folder position?

The common need for overflow (or underflow) to become 'saturation' == max_value (or min_value) is not an option (but then it is arithmetically 'wrong', I suppose ;-))

>    - What is your evaluation of the potential usefulness of the library?

Very valuable to make programs that 'always work' instead of 'mostly work'.

Users might appreciate a list of compiler versions that really do work.
Sadly 'Compliant C++14' is a bit fuzzy.  (Should become clearer if accepted and test matrix visible).

A good effort at working round fundamental C language defects.

>    - Did you try to use the library? With what compiler? Did you have any    problems?

Had a very brief try with VS 2015 with /std:c++latest added to command line (to try to ensure C++14 compliance) but am stuck with

1>j:\cpp\safe_numeric\safe_numeric\include\interval.hpp(107): error C2737: 'boost::numeric::`anonymous-namespace'::failed_result': 'constexpr' object must be initialized
1>j:\cpp\safe_numeric\safe_numeric\include\safe_base_operations.hpp(131): error C2244: 'boost::numeric::safe_base<T,Min,Max,P,E>::operator =': unable to match function definition to an existing declaration

I am sure that this is entirely my fault, but I'm out of the time allotted to this review.

Also - Is this warning(s) avoidable/relevant/quietable?

j:\cpp\safe_numeric\safe_numeric\include\safe_base.hpp(233): warning C4814: 'boost::numeric::safe_base<T,Min,Max,P,E>::operator =': in C++14 'constexpr' will not imply 'const'; consider explicitly specifying 'const'

I feel that all warnings should be avoided or suppressed using push'n'pop pragmas.

>    - How much effort did you put into your evaluation? A glance?

 A quick  reading.

>    - Are you especially knowledgeable about the problem domain?

No.

> Do you think the library should be accepted as a Boost library?

Yes.

Paul

PS I noted a few typos in documentation (though I doubt these have escaped Steven's fine-tooth-combing ;-).

multiplication etc. . C/C++ often automatically  - extraneous .

similar, We'll  - should be lower case w.

the expression will yield the correct mathematical result - missing .

trap at compiler time all operations which might fail at runtime.  - compile-time

(Better to use compile-time and run-time throughout? Inventing new words is unnecessary and hyphenating is good for readability)

This solution is simple, Just replace instances    - lower case j

parameters are guarenteed to meet requirements when - guaranteed

our program compiles without error - even when using the trap_exception exception policy  - missing .

We only needed to change two lines of code to achieve our goal missing .

( throw exception, compile time trap, etc..) no implementation should return an arithmetically incorrect result.  - extraneous space after ( and missing . and new sentence No implementation ...

C++ operations permitted on it's base type - should be its  (possessive!)
---
Paul A. Bristow
Prizet Farmhouse
Kendal UK LA8 8AB
+44 (0) 1539 561830







_______________________________________________
Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
Reply | Threaded
Open this post in threaded view
|

Re: [safe_numerics] Formal review starts today

Boost - Dev mailing list
On 3/9/17 7:23 AM, Paul A. Bristow via Boost wrote:

>>
>> Here are some questions you might want to answer in your review:
>>
>>    - What is your evaluation of the design?
>
> Complicated to use (compared to int),

Hmmm - a major design goal is that it be a no-brainer to use.  I'd like
to see you expand a little on this comment.

> and very, very complicated to write.

LOL - it's worse than it looks.  There are a lot of difficulties with
something like this. One is that it's hard to stick to the subject and
keep in mind what one's idea of the original purpose of the library is.
But this is common in almost all library efforts.  A major difficulty in
an elaborate TMP project is that the compiler doesn't really support it
very well.  Error messages are often unrelated to the mistake and very
unhelpful.  There is no convenient way to display messages which
show type information at compile time.  It would be useful to have a
BOOST_STATIC_WARNING that works.  These problems can only really be
addressed by enhancements to the compiler itself.  The fact that none of
these have been proposed suggests to me that TMP is a lot like teenage
sex - there's a lot more talk about it than is actually going on.
>
> But that is a consequence of the daft design of the C language.

tl;dr;

I want to defend the C language design. My first exposure to C was in
1980?  I was introduced to Ratfor, which was a Fortran-implemented C-like
language introduced in the Plauger book "Software Tools".  After that I
lusted after a C language tool.  The machine I had at my disposal was a
Data General Eclipse - a 16 bit architecture.  I got a hold of source code
for a compiler for the Z80 and managed to port it to the DG machine.
Then I managed to compile the standard source with my compiler.  I was
hooked!  The C language then was the world's first "portable assembler".
  This was huge at the time.  BTW - it can (and often is) used in this
manner.

The "problem" is that evolved the orignal C language into something
entirely different. - A system for describing and implementing higher
order ideas in an abstract and portable manner.  It was done through
evolution.  Dozens (hundreds) of attempts to build something like that
from scratch (fortran, cobol, pascal, modula, Ada) all failed for some
reason or another.  I believe in evolution.  I believe it is the source
of progress and success in all things - but it always leaves me
disappointed and wanting something more perfect - which I can never get.

> The language also doesn't allow use of hardware to handle overflow (and underflow)

When I started this project I was thinking that things would be better
if hardware handled these things.  But now that I look at what I've
achieved, I think I was wrong.  A key feature of this library is that it
allows one to select how to handle these and other similar situations.
Currently one can detect and handle these at runtime, detect
potential problems and fail at compile time, or promote types to larger
types to avoid the problems - (thereby "fixing" the original "daft"
design of the C language - and likely creating your own).  Perhaps
other ideas are possible.  Some have been mentioned - "saturation" for
example.

> and doesn't have arrays etc as first class citizens.

I don't agree with this - but arguing about 30 year old decisions is a
sign of old age.

> I don't believe that it is really an idiot-proof drop-in replacement for the built-in integral types,

Well, that is what I've intended and I think I've mostly achieved it.

> but it will be fine to be used for new projects, especially as it needed
> the fancy features of C++14.

tl;dr;

One of the major use cases I see for this is finding errors in legacy
code.  I included a small example which touches on this.  I actually
have a larger example - the code is in there.  But since it's a real (and
realistic) example - a stepping motor controller running on an 8 bit PIC
microcontroller - it was a little more complex than normal and I didn't
have time to finish the narrative which goes with the example.  I hope to
get back to it in the future.

I have done multiple projects with such tiny microcontrollers.  One big
issue with these is the development environment, which doesn't lend
itself to unit testing.  Mostly this is addressed just by skipping unit
testing - using burn (the chip) and crash (the program - but these
days it might mean the car itself).  But I like to factor out critical
pieces into modules which can be compiled and unit tested.  And I can create
test vectors which run all the test cases and end points.  When a unit
test fails I can step through with the debugger.  The cpp policy
implements safe types and checking with the correct sizes for the
microcontroller. I can compile and debug the tests in an environment
which closely tracks the target one.

By using the compile time trap as the exception policy  I can also
detect potential failures and tweak my code so that they can never
happen.  Then I can build and run on the target environment with much
more confidence that there are no bugs.

To me this is absolutely huge.  But it's a very, very, very tough sell.
People who work in the microcontroller world see C as portable
assembler.  People who work in the C++/Boost world see C++ as a way of
addressing general problems in a more abstract (mathematical-like) way.

I'm hoping to bridge two worlds here.  I'm not hopeful.  I'm a sucker
for lost causes.

>
> I agree that explicit conversions are the Right Thing,

I used to believe that.  Now my view is "not always"

> but they do carry a cost to the users -
> the need to understand the issues and take care with construction and
> assignment because this is a User Defined Type (UDT)
> and the special-case privileges of built-in do not apply (another daft
> design decision).

> Floating-point and fixed-point UDT have proved unintuitive to use
> because of explicit conversions;
> there are big elephant traps awaiting the unwary.

Right - I'm thinking that the library traps at compile time when one
does the following.  I'll have to recheck this.

safe<int> i;
i = 1.0;

but accepts
i = static_cast<int>(1.0);

> Costly at compile time because of the number of libraries included, but that's a cost worth paying.

and I don't think it's all that costly.  I'm using a 3 year old mac mini
with Xcode/clang for development and all tests compile in a few seconds.
  Not a big deal in my opinion.

>
> I like that the infrastructure might be extended to other than integral types.
>
>>    - What is your evaluation of the implementation?
>
> Above my pay grade.

LOL - that makes two of us.  That's why we have obi-wan-watanabe

>>    - What is your evaluation of the documentation?
>
> Good exposition of the problems and solutions.
>
> Good examples.
>
> Links to source of examples code would be *very* useful.
Easy one - I'll do this.  I'll also add the output from the examples
which is another idea that has been mentioned.

> Starting with an example is the most common way of 'getting started'.

>
> Will using "" file specification
> #include "../include/safe_range.hpp"
> instead of <>
> #include <../include/safe_range.hpp>
> cause trouble for users trying to use/modify the examples in Boost or their own folder position?

The version being reviewed in the incubator is meant to be built and
tested outside the boost infrastructure.  The boost testing infrastructure
supports the creation of the test matrix and centralized testing.  But
in my opinion it is an impediment to one who just wants to test a single
library not already included in boost.

Boost libraries sort of "presume" the centralized setup promoted by b2
headers.  I believe that this is also an obstacle to
mixing/matching/testing other libraries.

In any case, if this library is accepted, include file structure will
have to evolve to the boost way of doing things.  Not a big issue or
problem.

> The common need for overflow (or underflow) to become 'saturation' == max_value (or min_value) is not an option (but then it is arithmetically 'wrong', I suppose ;-))
Right - This has come up.  But I declined to address this for a few reasons.

a) Things are already really complex - I'm at my brain's limit
b) I don't want to pollute the overriding idea:

"This library provides a method for the brain dead to guarantee that his
program will not produce incorrect arithmetic results."

Adding more "features" dilutes the conceptual power of thee library,
This makes it impossible to describe what it does in one sentence.
>
>>    - What is your evaluation of the potential usefulness of the library?
>
> Very valuable to make programs that 'always work' instead of 'mostly work'.

Ahhh - this is the thing for me.  Imagine how you're going to feel when
your microcontroller is included in a product which ships 10000 units
and a bug is discovered.  How are you going to feel if that product is
an automobile accelerator or a Mars landing probe?  How, after 50 years of
computer systems and computer language development, did we arrive at this
situation?  I'm mystified and disheartened.

> Users might appreciate a list of compiler versions that really do work.
> Sadly 'Compliant C++14' is a bit fuzzy.  (Should become clearer if accepted and test matrix visible).

LOL - I try to follow the standard - but it's not really possible to
even read it.  I rely on testing - known limitations.  But it's the best
we have.  So far it's worked pretty well.  I can build on Clang with no
warnings and on recent versions of GCC and produce the exact same
results.  I would love to test on VC but I don't have such a machine.
I've tried to set up appveyor.yml to do this but have been unsuccessful.
I'm interested if someone wants to look into this.

>
> A good effort at working round fundamental C language defects.

A good effort - a ringing endorsement!  It's OK - one of my squash
partners - a female - says I'm a "good sport".  Doesn't bug me though.

>
>>    - Did you try to use the library? With what compiler? Did you have any    problems?
>
> Had a very brief try with VS 2015 with /std:c++latest added to command line (to try to ensure C++14 compliance) but am stuck with

I'd be curious to see the example you tried.  Did you try to run the
test suite and/or examples?

But this reinforces the suggestion that I should put the concept
checking - such as it is - into the library code to help detect usage
misunderstandings.  I'll look into this.
>
> 1>j:\cpp\safe_numeric\safe_numeric\include\interval.hpp(107): error C2737: 'boost::numeric::`anonymous-namespace'::failed_result': 'constexpr' object must be initialized
> 1>j:\cpp\safe_numeric\safe_numeric\include\safe_base_operations.hpp(131): error C2244: 'boost::numeric::safe_base<T,Min,Max,P,E>::operator =': unable to match function definition to an existing declaration
>
> But I am sure that this is entirely my fault, but I'm out of my time allotted to this review.

I don't think it's your fault.  I think that's an indication that I've
fallen short of my goal that the library be idiot proof.  I shouldn't
have to say this, but that does not mean that I think you're an idiot -
rather, if it's not simple for you, it's not going to be simple for an
idiot, which is my goal.

This reinforces the suggestion that I should put the concept checking -
such as it is - into the library code to help detect usage
misunderstandings.  I'll look into this.

>
> Also - Is this warning(s) avoidable/relevant/quietable?
>
> j:\cpp\safe_numeric\safe_numeric\include\safe_base.hpp(233): warning C4814: 'boost::numeric::safe_base<T,Min,Max,P,E>::operator =': in C++14 'constexpr' will not imply 'const'; consider explicitly specifying 'const'
>

This looks very easy to fix - but my compiler doesn't emit these
warnings.  Maybe it would with some switch.  I'll look into it.

> I feel that all warnings should be avoided or supressed using push'n'pop pragmas.

My goal is to closely stick to C++14.  So I shouldn't have any warnings.
I believe this is possible with some minor code tweaking.
>

Thanks for your comments.  They are very, very helpful.

> ---
> Paul A. Bristow
> Prizet Farmhouse
> Kendal UK LA8 8AB
> +44 (0) 1539 561830
>
>
>
>
>
>
>
> _______________________________________________
> Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
>


_______________________________________________
Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
Reply | Threaded
Open this post in threaded view
|

Re: [safe_numerics] Formal review starts today

Boost - Dev mailing list
> -----Original Message-----
> From: Boost [mailto:[hidden email]] On Behalf Of Robert Ramey via Boost
> Sent: 09 March 2017 18:32
> To: [hidden email]
> Cc: Robert Ramey
> Subject: Re: [boost] [safe_numerics] Formal review starts today
>
> On 3/9/17 7:23 AM, Paul A. Bristow via Boost wrote:
>
> >> Here are some questions you might want to answer in your review:
> >>    - What is your evaluation of the design?

> > Complicated to use (compared to int),
>
> Hmmm - a major design goal is that it be a no-brainer to use.  I'd like
> to see you exand a little on this comment.

One needs to fully understand static_cast - it's an unintuitive name for something that has unexpectedly complicated implications.  
 

> > But that is a consequence of the daft design of the C language.
>
> tl;dr;
>
> I want to defend the C language design. My first exposure to C was in
> 1980?  I was introduced to Ratfor which was a fortran implemented C like
> language introduced in the Plauger book "Software Tools".  After that I
> lusted after a C language tool.  The machine I had at my disposal was a
> Data General Eclipse - a 16 bit architecture.  I got a hold source code
> for a compiler for the Z80 and managed to port it to the DG machine.
> Then I managed to compile the standard source which my compiler.  I was
> hooked!  The C language then was the world's first "portable assembler".
>   This was huge at the time.  

I can't resist saying that my reaction to learning of C design philosophy was LOL :-(

But programmers liked quick'n'dirty and being trusted (the biggest mistake of all),
a host of .edus started teaching C, and like FORTRAN's bottomless pit of working subroutines,
it was unstoppable.

And so here we are putting Band-Aids on the C++ and STL Band-Aids*.

> I'm hoping to bridge two worlds here.  I'm not hopeful.  I'm a sucker for lost causes.

I'm optimistic.  I'm not the only one getting cross with 'mostly work' programs,
and as you observe, people will get *really cross* with 'mostly work' killer cars.
Chris Kormanyos has publicised how to use recent C++ on bare-metal in his book Real-Time C++.

> > Users might appreciate a list of compiler versions that really do work.

They really *must* have such a list.

> >>    - Did you try to use the library? With what compiler? Did you have any    problems?

> > Had a very brief try with VS 2015  and failed.

John Maddock has since explained why nothing I tried worked.  I'm a bit shocked that it hasn't been tested on MSVC.  My acceptance
was on the assumption that it would work.  It really must be portable over recent GCC, Clang and MSVC at the very minimum.

I suggest that we should pause the review until you adopt John's patches and reissue the review code and then restart the review.

It'll be a bit poor to accept the library until a few people confirm it's working on MSVC.

Keep going!

Paul

* Band-Aid Trademark of Johnson and Johnson ;-)
---
Paul A. Bristow
Prizet Farmhouse
Kendal UK LA8 8AB
+44 (0) 1539 561830




_______________________________________________
Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
Reply | Threaded
Open this post in threaded view
|

Re: [safe_numerics] Formal review starts today

Boost - Dev mailing list
2017-03-11 15:27 GMT+01:00 Paul A. Bristow via Boost <[hidden email]>
:

> > -----Original Message-----
> > From: Boost [mailto:[hidden email]] On Behalf Of Robert
> Ramey via Boost
> > Sent: 09 March 2017 18:32
> > To: [hidden email]
> > Cc: Robert Ramey
> > Subject: Re: [boost] [safe_numerics] Formal review starts today
> >
> > On 3/9/17 7:23 AM, Paul A. Bristow via Boost wrote:
> >
> > >> Here are some questions you might want to answer in your review:
> > >>    - What is your evaluation of the design?
>
> > > Complicated to use (compared to int),
> >
> > Hmmm - a major design goal is that it be a no-brainer to use.  I'd like
> > to see you exand a little on this comment.
>
> One needs to fully understand static_cast - it's an unintuitive name for
> something that has unexpectedly complicated implications.
>
> > > But that is a consequence of the daft design of the C language.
> >
> > tl;dr;
> >
> > I want to defend the C language design. My first exposure to C was in
> > 1980?  I was introduced to Ratfor which was a fortran implemented C like
> > language introduced in the Plauger book "Software Tools".  After that I
> > lusted after a C language tool.  The machine I had at my disposal was a
> > Data General Eclipse - a 16 bit architecture.  I got a hold source code
> > for a compiler for the Z80 and managed to port it to the DG machine.
> > Then I managed to compile the standard source which my compiler.  I was
> > hooked!  The C language then was the world's first "portable assembler".
> >   This was huge at the time.
>
> I can't resist saying that my reaction to learning of C design philosophy
> was LOL :-(
>
> But programmers liked quick'n'dirty and being trusted  (the biggest
> mistake of all),
> a host of .edu starting teaching C, and like FORTRANs bottomless pit of
> working subroutines,
> it was unstoppable.
>
> And so here we are putting Band-Aids on the C++ and STL Band-Aids*.
>
> > I'm hoping to bridge two worlds here.  I'm not hopeful.  I'm a sucker
> for lost causes.
>
> I'm optimistic.  I'm not the only one getting cross with 'mostly work'
> programs,
> and as you observe, people will get *really cross* with 'mostly work'
> killer cars.
> Chris Kormanyos has publicised how to use recent C++ on bare-metal in his
> book Real-Time C++.
>
> > > Users might appreciate a list of compiler versions that really do work.
>
> They really *must* have such a list.
>
> > >>    - Did you try to use the library? With what compiler? Did you have
> any    problems?
>
> > > Had a very brief try with VS 2015  and failed.
>
> John Maddock has since explained why nothing I tried worked.  I'm a bit
> shocked that it hasn't been tested on MSVC.  My acceptance
> was on the assumption that it would work.  It really must be portable over
> recent GCC, Clang and MSVC at the very minimum.
>

According to formal Boost criteria, it is sufficient for the library to
work on two major compilers. These formal criteria are met by
safe_numerics. Of course, I acknowledge that formal criteria are not the only
thing in the world.


>
> I suggest that we should pause the review until you adopt John's patches
> and reissue the review code and then restart the review.
>

From the formal point of view, the two options for this I can see are:

   - To conclude the review as rejected, and schedule a new one.
   - Accept the library conditionally, and make the fix a hard condition.

> It'll be a bit poor to accept the library until a few people confirm it's
> working on MSVC.
>

Accepting the library does not mean it is immediately available in the next
Boost release. If the library is accepted conditionally, you would be
guaranteed that the users will get MSVC support (if adding this support is
doable).

Regards,
&rzej;

_______________________________________________
Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
Reply | Threaded
Open this post in threaded view
|

Re: [safe_numerics] Formal review starts today

Boost - Dev mailing list
In reply to this post by Boost - Dev mailing list
AMDG

On 03/11/2017 07:27 AM, Paul A. Bristow via Boost wrote:
>
> John Maddock has since explained why nothing I tried worked.  I'm a bit shocked that it hasn't been tested on MSVC.  My acceptance
> was on the assumption that it would work.  It really must be portable over recent GCC, Clang and MSVC at the very minimum.
>
> I suggest that we should pause the review until you adopt John's patches and reissue the review code and then restart the review.
>
> It'll be a bit poor to accept the library until a few people confirm it's working on MSVC.
>

  VC2017 (which is the minimum required to build the library,
even with patches) wasn't released until halfway through
the review.

In Christ,
Steven Watanabe


_______________________________________________
Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
Reply | Threaded
Open this post in threaded view
|

Re: [safe_numerics] Formal review starts today

Boost - Dev mailing list
In reply to this post by Boost - Dev mailing list
2017-03-11 15:41 GMT+01:00 Andrzej Krzemienski <[hidden email]>:

>
>
> 2017-03-11 15:27 GMT+01:00 Paul A. Bristow via Boost <
> [hidden email]>:
>
>>
>>
>> John Maddock has since explained why nothing I tried worked.  I'm a bit
>> shocked that it hasn't been tested on MSVC.  My acceptance
>> was on the assumption that it would work.  It really must be portable
>> over recent GCC, Clang and MSVC at the very minimum.
>>
>
> According to formal Boost criteria, it is sufficient for the library to
> work on two major compilers. These formal criteria are met by
> safe_numerics. Of course, I acknowledge that formal criteria are not the only
> thing in the world.
>
>
>>
>> I suggest that we should pause the review until you adopt John's patches
>> and reissue the review code and then restart the review.
>>
>
> From the formal point of view, the two options for this I can see are:
>
>    - To conclude the review as rejected, and schedule a new one.
>    - Accept the library conditionally, and make the fix a hard condition.
>
> It'll be a bit poor to accept the library until a few people confirm it's
>> working on MSVC.
>>
>
> Accepting the library does not mean it is immediately available in the
> next Boost release. If the library is accepted conditionally, you would be
> guaranteed that the users will get MSVC support (if adding this support is
> doable).
>
> Regards,
> &rzej;
>

Still, it may be a good idea to implement the MSVC fix from John
immediately, and give people an opportunity to test it for the next few
days.

Regards,
&rzej;

_______________________________________________
Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost