Discussion:
[boost] [Safe Numerics] Review
John Maddock via Boost
2017-03-11 19:37:46 UTC
OK, here's my review of Robert's library.

By way of something different, I've tried to not read the docs (though I
ended up scanning them in the end) and just tried to use the library.
Here's what I've found:


1) Using:

safe<int>(-1) & 1

I get a hard assert failure at safe_base_operations.hpp:1373. I'm
assuming that this is a mistake, and the usual error handlers should be
called? Bitwise or has the same issue. More tests required ;)
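For illustration, here's roughly what I'd expect instead (the header name
and the default throwing policy are my assumptions):

#include <safe_integer.hpp> // review-era header name - an assumption
#include <exception>
#include <iostream>

int main() {
    try {
        int r = boost::numeric::safe<int>(-1) & 1; // currently: hard assert
        std::cout << r << '\n';
    } catch (const std::exception& e) {
        // what I'd expect: the exception policy's error handler fires
        std::cout << "error handler invoked: " << e.what() << '\n';
    }
}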

2) I'm not sure if this is an msvc issue but in:

constexpr safe_base operator~()

The static assert is triggered for *unsigned types* and not for signed.
Even if this is a compiler issue, it indicates a missing test case or two.

3) If I assign a float to an integer, then I get the error: "conversion
of integer to float loses precision" which isn't quite what you meant to
say.
More particularly, for float conversions I would expect:

* Conversion from a float holding an integer value that's within the
range of the target type to always succeed.
* Conversion from a float holding a non-integral value to conditionally
succeed (with truncation) depending on the policy in effect.
* Conversion *to* a float may also fail if the integer is outside the
range of the float (no checking may be required if the number of bits in
the integer is less than the number of bits in the float).
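
Sketched out, the behaviour I'd expect (the policy behaviour and exact
error conditions here are placeholders, not the library's current
semantics):

boost::numeric::safe<int> a = 1.0;  // integral value, in range: always ok
boost::numeric::safe<int> b = 1.5;  // non-integral: policy decides - error, or truncate to 1
boost::numeric::safe<int> c = 1e10; // out of range for 32-bit int: error
float f = boost::numeric::safe<long long>(1LL << 62);
    // 63 significant bits won't fit float's 24-bit mantissa - needs a check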

4) Is constexpr arithmetic supposed to be supported? I tried:

constexpr boost::numeric::safe<int> j = 0;
constexpr boost::numeric::safe<int> k = 3;
constexpr boost::numeric::safe<int> l = j + k;

Which generates an internal compiler error for msvc, and for gcc-5.3 I get:

../../../include/safe_base_operations.hpp:332:35: error: call to
non-constexpr function ‘void boost::numeric::dispatch(const
boost::numeric::checked_result<R>&) [with EP =
boost::numeric::throw_exception; R = int]’
dispatch<exception_policy>(r);

Similarly if I try to add a literal to a safe<int>. Again, since the tests
all pass, I'm assuming they're missing constexpr tests?

5) What is the purpose of class safe_literal? constexpr initialization
seems to work just fine without it?

6) In the docs, the example of using error handler dispatch is nonsense
(top of page 41)?

7) Related to the above, I see the need to be able to define your own
primitives which behave just like yours.
The example I have is for bit-scan-left/right which has as precondition
that the argument is non-zero, but there are other bit-fiddling
operations (bit count etc) which some folks may require. A "cookbook"
example would be good.
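
For instance, a sketch of the kind of primitive I mean (hypothetical code,
with the precondition turned into a checked error rather than an assert):

#include <stdexcept>

// index of the least significant set bit; precondition: x != 0
template <class T>
unsigned bit_scan_right(T x) {
    auto v = static_cast<unsigned long long>(x);
    if (v == 0)
        throw std::domain_error("bit_scan_right: argument must be non-zero");
    unsigned i = 0;
    while (!(v & 1)) {
        v >>= 1;
        ++i;
    }
    return i;
}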

8) In the tests, you have an undocumented function "log" in the
library. Not sure if this is intended to be part of the API or not, but
IMO it's misnamed. ilog2 seems to be a semi-standard name for this.
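
For reference, a minimal ilog2 in that sense (my sketch, not the
library's function):

// position of the highest set bit; precondition: x != 0
constexpr unsigned ilog2(unsigned long long x) {
    unsigned r = 0;
    while (x >>= 1)
        ++r;
    return r;
}
// ilog2(1) == 0, ilog2(2) == 1, ilog2(255) == 7, ilog2(256) == 8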

9) By way of testing the library's performance I hooked it into the
Miller-Rabin primality test in the multiprecision lib (test attached, it
counts how many probable primes there are below 30000000), here are the
times for msvc:

Testing type unsigned:
time = 17.6503 seconds
count = 1857858
Testing type safe<unsigned>:
time = 83.5055 seconds
count = 1857858
Testing type 32-bit cpp_int:
time = 36.9026 seconds
count = 1857858

and for GCC-MINGW-5.3.0:

Testing type unsigned:
time = 17.1037 seconds
count = 1857858
Testing type unsigned:
time = 18.4377 seconds
count = 1857858
Testing type unsigned:
time = 21.0592 seconds
count = 1857858

I'm surprised by the difference in times between msvc and gcc, also
deeply impressed that there is effectively no performance penalty with
GCC (I was expecting there should be some!)

10) I don't find the scope of the library over-broad, in fact I probably
find it about right wrt the policies etc.

11) There were a few traits and such that it would be nice to have; the
problem, as ever, is what to call them and where to put them:

* A trait for extracting the underlying type of a safe<> - use case for
example where you enable_if a function on is_safe so you don't have the
template argument list to safe<>.
* Do we need a function for extracting the underlying type of a safe<>?
Or is casting to the underlying type optimal? Actually, maybe having a
function would negate the need
for the above trait, as we could just write 'auto x = mysafe.value();' ?
* Do we need traits equivalent to
is_signed/is_unsigned/make_signed/make_unsigned? My gut feeling is yes
we do, the issue is what to call them, and where to put them -
we're not allowed to specialize the type_traits versions (either boost
or std), but we would need a central "one true trait" version that can
be specialized by other libraries
as well (Multiprecision for example). We've backed ourselves into a
corner over this I fear!
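
To make that concrete, the sort of thing I have in mind (illustrative
names only, written against safe_base's template signature):

#include <type_traits>

template <class T>
struct is_safe : std::false_type {};

template <class T, T Min, T Max, class P, class E>
struct is_safe<boost::numeric::safe_base<T, Min, Max, P, E>>
    : std::true_type {};

template <class T>
struct underlying_type { using type = T; };

template <class T, T Min, T Max, class P, class E>
struct underlying_type<boost::numeric::safe_base<T, Min, Max, P, E>>
    { using type = T; };

// use case: enable_if a function on is_safe - no safe<> argument list needed
template <class T>
typename std::enable_if<
    is_safe<T>::value,
    typename underlying_type<T>::type
>::type
unwrap(const T& x) {
    return static_cast<typename underlying_type<T>::type>(x);
}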

12) When calling boost::integer::gcd/lcm the sub-optimal(?) non-binary
versions are called for safe<> - you might want to look into seeing what
traits
need to be defined to ensure that the same version as the underlying
type is called.

Those are all the issues I can see for now... in conclusion, as for
whether the library should be accepted, yes I believe it should be,
modulo the issues above.

Regards, John.






Robert Ramey via Boost
2017-03-11 21:35:51 UTC
Post by John Maddock via Boost
OK, here's my review of Robert's library.
By way of something different, I've tried to not read the docs (though I
ended up scanning them in the end) and just tried to use the library.
safe<int>(-1) & 1
I get a hard assert failure at safe_base_operations.hpp:1373. I'm
assuming that this is a mistake, and the usual error handlers should be
called? Bitwise or has the same issue. More tests required ;)
Bitwise operators have issues both in concept and implementation, which
I'm in the process of addressing.
Post by John Maddock via Boost
constexpr safe_base operator~()
The static assert is triggered for *unsigned types* and not for signed.
Even if this is a compiler issue, it indicates a missing test case or two.
I've been wondering how to address bitwise operations on signed integers.
I've been thinking that they are an error which should be trapped at compile
time. First of all I don't see a use case for them, and second they
have portability problems: the results are arithmetically different
depending on the size of the int. I can do it either way; it's really a
question about what the library is meant to do - insist on arithmetic
consistency and correctness - or ... what? For now I've trapped attempts
to use bitwise operations on signed integers and permitted them on
unsigned integers. But as you can see, I'm conflicted here. I wish we
had static_warning in C++ but we don't.
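
For example (my illustration, using unsigned so nothing here is undefined -
the point is that the arithmetic value of ~x depends on the operand width):

#include <cstdint>
#include <iostream>

int main() {
    std::uint16_t a = 0;
    std::uint32_t b = 0;
    std::cout << static_cast<std::uint16_t>(~a) << '\n'; // 65535
    std::cout << ~b << '\n'; // 4294967295
}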
Post by John Maddock via Boost
3) If I assign a float to an integer, then I get the error: "conversion
of integer to float loses precision" which isn't quite what you meant to
say.
* Conversion from a float holding an integer value that's within the
range of the target type to always succeed.
* Conversion from a float holding a non-integral value to conditionally
succeed (with truncation) depending on the policy in effect.
* Conversion *to* a float may also fail if the integer is outside the
range of the float (no checking may be required if the number of bits in
the integer is less than the number of bits in the float).
My implementation of this is sort of an afterthought and needs
attention. It's similar to the case above. It's not clear what I
want to do - even to me.

int i = 1.0

I wonder what we're really doing here. In C/C++ I get it. But when I
write the value 1.0 in C++ I mean a value which in some sense
approximates 1.0. I think it's a mistake.

int j = 1.5

This is pretty clear to me. The fundamental premise of the library
could be stated as:

"All expressions which change the arithmetic value are considered errors"

On the other hand, I might be OK with

int k = floor(1.5);

I'm still sorting out how I feel about going back and forth between
floats and integers.
Post by John Maddock via Boost
constexpr boost::numeric::safe<int> j = 0;
constexpr boost::numeric::safe<int> k = 3;
constexpr boost::numeric::safe<int> l = j + k;
../../../include/safe_base_operations.hpp:332:35: error: call to
non-constexpr function ‘void boost::numeric::dispatch(const
boost::numeric::checked_result<R>&) [with EP =
boost::numeric::throw_exception; R = int]’
dispatch<exception_policy>(r);
I would expect this to work and be constexpr. But I get the same result
on clang. I looked into this a little bit.

constexpr boost::numeric::safe<int> j = 0; isn't constexpr either
because the "constexprness" is lost with literal values. For this reason
I created safe_signed_literal<0>.

void f1(){
    using namespace boost::numeric;
    constexpr safe<int> j = 0;
    constexpr safe<int> k = 3;
    constexpr safe<int> l = j + k; // compile error
}

j + k can exceed the range of int and this can only be detected at
runtime. So l = j + k is not a constexpr expression.

For this I created safe_signed_literal<0>, to be used in

constexpr boost::numeric::safe<int> j = safe_signed_literal<0>();

But this doesn't quite do it, because in the assignment to j the
safe_signed_literal (with a range of [0,0]) gets transformed to a
variable with a range of [0,65535].

then j + k has to include runtime checking code.
Post by John Maddock via Boost
Similarly if I try to add a literal to a safe<int>. Again, since the tests
all pass, I'm assuming they're missing constexpr tests?
I don't think it's so much a matter of missing tests as forgetting
the limitations of constexpr.
Post by John Maddock via Boost
5) What is the purpose of class safe_literal? constexpr initialization
seems to work just fine without it?
safe<short> j has a value with a compile time range of [0,65535].
So the range of 2 * j won't fit in a short any more and things will have
to be checked at runtime.

But if I use
safe_signed_literal<0> j;

it will be rendered as a std::int8_t with a range of [0,0]. Then the
range of j * j results in a safe_signed_range<0, 0, ...> which is known
at compile time to never overflow, so runtime checking is not necessary.

Using expressions that start out with safe_..._literal will mean
expressions which are calculated using the smallest types possible -
faster code (usually) with no error checking required.

It's unfortunate that when safe<int> is constructed with literal values,
the result is not constexpr - the "constexpr"-ness isn't transmitted to
literal function arguments. BUT we do have safe_literal which does
retain this information. The following examples should show how this works:

// your example - doesn't work
void f1(){
    using namespace boost::numeric;
    constexpr safe<int> j = 0;
    constexpr safe<int> k = 3;
    constexpr safe<int> l = j + k; // compile error
}

// damn! - still doesn't work. intermediate values lose range
void f2(){
    using namespace boost::numeric;
    constexpr safe<int> j = boost::numeric::safe_signed_literal<0>();
    constexpr safe<int> k = boost::numeric::safe_signed_literal<3>();
    constexpr safe<int> l = j + k; // compile error
}

// damn! - safe literals by default have void policies, so the
// binary operations require one operand to have one non-void
// policy of each type.

void f3(){
    using namespace boost::numeric;
    safe_signed_literal<0> j;
    safe_signed_literal<3> k;
    constexpr auto l = safe_signed_literal<3>();
    constexpr const safe<int> l2 = j + k; // fails because no policies
}

// joy - smooth as silk - no interrupts and constexpr expression.
void f4(){
    using namespace boost::numeric;
    safe_signed_literal<0, native, trap_exception> j;
    safe_signed_literal<3> k;
    constexpr auto l = safe_signed_literal<3>();
    constexpr const safe<int> l2 = j + k;
}

// auto can come in handy also!
void f5(){
    using namespace boost::numeric;
    constexpr auto j = safe_signed_literal<0, native, trap_exception>();
    constexpr auto k = safe_signed_literal<3>();
    constexpr auto l = j + k;
}
Post by John Maddock via Boost
6) In the docs, the example of using error handler dispatch is nonsense
(top of page 41)?
I need more context to find this.
Post by John Maddock via Boost
7) Related to the above, I see the need to be able to define your own
primitives which behave just like yours.
The example I have is for bit-scan-left/right which has as precondition
that the argument is non-zero, but there are other bit-fiddling
operations (bit count etc) which some folks may require. A "cookbook"
example would be good.
I used to have a lot of these. I boiled it down to a few that I
actually use in "utility.hpp". I think that there's a case for a small
boost library of type utilities, but I don't think this is the place for it.
Post by John Maddock via Boost
8) In the tests, you have an undocumented function "log" in the
library. Not sure if this is intended to be part of the API or not, but
IMO it's misnamed. ilog2 seems to a semi-standard name for this.
It's an implementation detail. Stephen has suggested that I use a
different name for this and I'm inclined to use bit_count.
Post by John Maddock via Boost
9) By way of testing the library's performance I hooked it into the
Miller-Rabin primality test in the multiprecision lib (test attached, it
counts how many probable primes there are below 30000000), here are the
time = 17.6503 seconds
count = 1857858
time = 83.5055 seconds
count = 1857858
time = 36.9026 seconds
count = 1857858
time = 17.1037 seconds
count = 1857858
time = 18.4377 seconds
count = 1857858
time = 21.0592 seconds
count = 1857858
I'm surprised by the difference in times between msvc and gcc, also
deeply impressed that there is effectively no performance penalty with
GCC (I was expecting there should be some!)
Sounds too good to be true. Are you sure you tested it correctly? It
says type unsigned for the second test - I expect to see safe<unsigned>.
Similar for the last test. I'm not sure what cpp_int means - is it the
promotion policy?

If you had nothing else to do you could tweak the code a little to use
safe_..._range and/or safe_literal - maybe even automatic and constexpr
where possible. I would expect/hope this would yield a measurably
faster program. I have no idea how much effort this would consume.

BTW the "trap" exception policy is fun to experiment with - really.
Post by John Maddock via Boost
10) I don't find the scope of the library over-broad, in fact I probably
find it about right wrt the policies etc.
11) There were a few traits and such that it would be nice to have, the
* A trait for extracting the underlying type of a safe<> - use case for
example where you enable_if a function on is_safe so you don't have the
template argument list to safe<>.
I do have an is_safe which is important, but it's used only internally.
Post by John Maddock via Boost
* Do we need a function for extracting the underlying type of a safe<>?
base_type<T>::type does the job - used but not documented.
Post by John Maddock via Boost
Or is casting to the underlying type optimal?
It is optimal. But you might not always know what the underlying type
is. See safe_..._range<MIN, MAX> which uses the smallest type that will
hold the range. Before you say anything about this - I considered using
the fastest, but for no good reason I chose the smallest.
Post by John Maddock via Boost
Actually, maybe having a
function would negate the need
for the above trait, as we could just write 'auto x = mysafe.value();' ?
there is also

template<
    class T,
    T Min,
    T Max,
    class P,
    class E
>
constexpr T base_value(
    const safe_base<T, Min, Max, P, E> & st
) {
    return static_cast<T>(st);
}

used internally but not documented. So you could easily say

safe<int> i = 42;
auto x = base_value(i); // returns integer 42
Post by John Maddock via Boost
* Do we need traits equivalent to
is_signed/is_unsigned/make_signed/make_unsigned? My gut feeling is yes
we do, the issue is what to call them, and where to put them -
we're not allowed to specialize the type_traits versions (either boost
or std), but we would need a central "one true trait" version that can
be specialized by other libraries
as well (Multiprecision for example). We've backed ourselves into a
corner over this I fear!
These are used extensively internally. I've specialized numeric_limits
for this purpose.

auto max = std::numeric_limits<safe<int>>::max(); // works as advertised.

so the following is very useful

auto max = std::numeric_limits<safe_signed_range<0,42>>::max(); // max == 42
base_type<safe_signed_range<0,42>>::type // is uint8_t

I guess it would be interesting to make these official.
Post by John Maddock via Boost
12) When calling boost::integer::gcd/lcm the sub-optimal(?) non-binary
versions are called for safe<> - you might want to look into seeing what
traits
need to be defined to ensure that the same version as the underlying
type is called.
I'll have a look
Post by John Maddock via Boost
Those are all the issues I can see for now... in conclusion, as for
whether the library should be accepted, yes I believe it should be,
modulo the issues above.
Regards, John.
John Maddock via Boost
2017-03-12 11:16:49 UTC
Post by Robert Ramey via Boost
Post by John Maddock via Boost
OK, here's my review of Robert's library.
By way of something different, I've tried to not read the docs (though I
ended up scanning them in the end) and just tried to use the library.
safe<int>(-1) & 1
I get a hard assert failure at safe_base_operations.hpp:1373. I'm
assuming that this is a mistake, and the usual error handlers should be
called? Bitwise or has the same issue. More tests required ;)
Bitwise operators have issues both in concept and implementation,
which I'm in the process of addressing.
Post by John Maddock via Boost
constexpr safe_base operator~()
The static assert is triggered for *unsigned types* and not for signed.
Even if this is a compiler issue, it indicates a missing test case or two.
I've been wondering how to address bitwise operations on signed integers.
I've been thinking that they are an error which should be trapped at
compile time. First of all I don't see a use case for them, and
second they have portability problems: the results are arithmetically
different depending on the size of the int. I can do it either way;
it's really a question about what the library is meant to do - insist
on arithmetic consistency and correctness - or ... what? For now
I've trapped attempts to use bitwise operations on signed integers
and permitted them on unsigned integers. But as you can see, I'm
conflicted here. I wish we had static_warning in C++ but we don't.
Bitwise operations on signed types are well defined *as long as the
value is positive*; as soon as the sign bit is set, or you use an
operation which would touch it in some way, you have undefined
behaviour. For positive values, bitwise ops are genuinely useful too -
the usual case is the test for even/oddness with `x & 1`.
Post by Robert Ramey via Boost
the results are arithmetically different depending on the size of the int
Huh? I don't think so, not for positive values.

While we're at it, operator^ doesn't behave as expected:

safe<int>(4) ^ 1

Generates a static assert:

"error C2338: safe type cannot be constructed with this type"

Leaving aside the fact that I don't understand the error message, the
result of 4 ^ 1 is 5 and certainly can be constructed for that type.

So I believe the correct behaviour should be:

* Bitwise operations on signed types should be allowed by default as
long as they don't invoke undefined behaviour by operating on (or
generating) negative values - this should be a runtime check.
* operator ~ always touches the sign bit so is undefined for signed
types - however your static assert currently fails for *unsigned* types
(at least with msvc).
* It would be nice to be able to alter the behaviour via a static_assert
- is it possible for the error policy to static assert on a bitwise op
on a signed type - so that all bitwise ops on signed types become
compiler errors?
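
Something along these lines (a sketch of the proposed runtime check, not
library code):

#include <stdexcept>
#include <type_traits>

template <class T>
T checked_bitand(T a, T b) {
    static_assert(std::is_integral<T>::value, "integral types only");
    if (a < 0 || b < 0)
        throw std::domain_error("bitwise operation on a negative value");
    return a & b; // well defined: both operands are non-negative
}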

Regards, John.



Peter Dimov via Boost
2017-03-12 13:20:01 UTC
* It would be nice to be able to alter the behaviour via a static_assert -
is it possible for the error policy to static assert on a bitwise op on a
signed type - so that all bitwise ops on signed types become compiler
errors?
Isn't this what trap_exception is supposed to do?


Peter Dimov via Boost
2017-03-12 14:37:26 UTC
* Bitwise operations on signed types should be allowed by default as long
as they don't invoke undefined behaviour by operating on (or generating)
negative values - this should be a runtime check.
Agreed.

The amusing consequence (I'm easily amused) is that one will be able to
check that a safe<int> x is nonnegative with x | 0, a well known JavaScript
idiom.


John Maddock via Boost
2017-03-12 11:49:54 UTC
Post by Robert Ramey via Boost
Post by John Maddock via Boost
3) If I assign a float to an integer, then I get the error: "conversion
of integer to float loses precision" which isn't quite what you meant to
say.
* Conversion from a float holding an integer value that's within the
range of the target type to always succeed.
* Conversion from a float holding a non-integral value to conditionally
succeed (with truncation) depending on the policy in effect.
* Conversion *to* a float may also fail if the integer is outside the
range of the float (no checking may be required if the number of bits in
the integer is less than the number of bits in the float).
My implementation of this is sort of an afterthought and needs
attention. It's similar to the case above. It's not clear what I
want to do - even to me.
int i = 1.0
I wonder what we're really doing here. In C/C++ I get it. But when I
write the value 1.0 in C++ I mean a value which in some sense
approximates 1.0. I think it's a mistake.
int j = 1.5
This is pretty clear to me. The fundamental premise of the library
could be stated as:
"All expressions which change the arithmetic value are considered errors"
On the other hand, I might be OK with
int k = floor(1.5);
I'm still sorting out how I feel about going back and forth between
floats and integers.
Nod.

It's fairly common to see in floating point code, something like:

int integer_part = my_float;
double fractional_part = my_float - integer_part;

which is of course pretty seriously unsafe :(

Casting to a safe<int> to ensure there is no overflow would simplify a
lot of error checking code.

Now we could insist that my_float is passed through
floor/ceil/trunc/round before the conversion, and that's OK by me even
though the function call is superfluous in this instance, but since we
don't know where a floating point value has been, we can't actually
allow that, without also allowing assignment from any old float (albeit
with runtime checking).

BTW for the record, Boost.Math uses itrunc/ltrunc/lltrunc for this
operation which throws when the float is too large for the target type.
Lots of ways to tackle this, I just wanted you to be aware that there
are genuine use cases where float-interoperability is useful and needed.

I guess there are several possible solutions:

Policies: we have 3 levels of checks:
* Permissive, allow all conversions from float and truncate (as long as
the truncated value is in range).
* Strict, only allow conversion from in-range integral values. Values
with fractional parts throw.
* Never: static assert on any attempt to convert from float.

An alternative is to use external functions; let's say we have:

template <class I, class F>
safe<I> safe_trunc(F f);

Which throws only when the truncated value is out of range. Likewise we
would need safe_round, safe_floor and safe_ceil. Obviously a certain
amount of "bike shed colouring" may be required on those names ;)

John.



Robert Ramey via Boost
2017-03-12 16:10:12 UTC
There's another issue here as well. I've tried to shy away from making
the presumption that all arithmetic is two's complement. When I think
about the cases you cite, I then have to evaluate them in light of this
view. Going down this path leads me to a chamber with multiple exits -
all alike. When I find myself here, I've reverted to the text of the
standard - which is more restrictive. (Negative shift? For me, no
problem - but the standard says it's undefined.) The real question for
me here is should I

a) insist that all code be standards conforming and not engage in
undefined behavior, or

b) permit code which is in common usage (like shifting negative numbers)
but is undefined by the standard.

I've tended toward the former for a couple of reasons. Undefined
behavior is non-portable - even though it's likely portable in the 3 or
4 compilers which occupy the current market.

I've become aware that compiler writers are using undefined behavior to
optimize out related code - apparently without letting us know. If
this is true, we're laying a whole new minefield.

Sooooo - I've been inclined to disallow behavior which makes sense and
strictly follow the standard - because it seems that's the only way to
make a guarantee - no unexpected behavior. Also - it's the only way I
can avoid having to make "hidden" decisions inside the code which create
the kind of uncertainty which is the cause of the real problems that I'm
trying to address.

Of course it's not made easier by the fact that the standard also
changes in subtle ways in what it permits.

I want to be able to write code which does what it says it does - that's
my goal. But it seems that I'm frustrated by the good intentions of others.
The road to hell IS paved with good intentions - at least in this case.

Robert Ramey


John Maddock via Boost
2017-03-12 18:23:41 UTC
Post by Robert Ramey via Boost
There's another issue here as well. I've tried to shy away from
making the presumption that all arithmetic is two's complement. When
I think about the cases you cite, I then have to evaluate them in
light of this view. Going down this path leads me to a chamber with
multiple exits - all alike. When I find myself here, I've reverted to
the text of the standard - which is more restrictive. (Negative
shift? For me, no problem - but the standard says it's undefined.) The
real question for me here is should I
a) insist that all code be standards conforming and not engage in
undefined behavior
No argument from me there, as far as I know I haven't suggested
otherwise (though I wouldn't necessarily bet against it!)

John.
Post by Robert Ramey via Boost
b) permit code which is in common usage (like shifting negative
numbers) but is undefined by the standard.
I've tended toward the former for a couple of reasons. Undefined
behavior is non-portable - even though it's likely portable in the 3
or 4 compilers which occupy the current market.
I've become aware that compiler writers are using undefined behavior
to optimize out related code - apparently without letting us know.
If this is true, we're laying a whole new minefield.
Sooooo - I've been inclined to disallow behavior which makes sense and
strictly follow the standard - because it seems that's the only way to
make a guarantee - no unexpected behavior. Also - it's the only way I
can avoid having to make "hidden" decisions inside the code which
create the kind of uncertainty which is the cause of the real problems
that I'm trying to address.
Of course it's not made easier by the fact that the standard also
changes in subtle ways in what it permits.
I want to be able to write code which does what it says it does -
that's my goal. But it seems that I'm frustrated by the good intentions
of others. The road to hell IS paved with good intentions - at least in
this case.
Robert Ramey
Robert Ramey


Robert Ramey via Boost
2017-03-12 21:05:53 UTC
Post by John Maddock via Boost
int integer_part = my_float;
double fractional_part = my_float - integer_part;
which is of course pretty seriously unsafe :(
Casting to a safe<int> to ensure there is no overflow would simplify a
lot of error checking code.
how so?
Post by John Maddock via Boost
BTW for the record, Boost.Math uses itrunc/ltrunc/lltrunc for this
operation which throws when the float is too large for the target type.
Lots of ways to tackle this, I just wanted you to be aware that there
are genuine use cases where float-interoperability is useful and needed.
I've never doubted this. The main thing here is that I wanted to keep the
scope of the library small enough to actually do. Treatment of
safe<float> is likely a whole 'nuther thing - perhaps even more intricate
than dealing with int and unsigned int. I thought integers would be simple;
the project started out with just an afternoon's work. I've long forgotten
what I wanted/needed it for. It metastasized into what it is today.

But as we see, I haven't been able to just ignore floating point. So
we'll have to address it at least as it interacts with integers.
Post by John Maddock via Boost
* Permissive, allow all conversions from float and truncate (as long as
the truncated value is in range).
* Strict, only allow conversion from in-range integral values. Values
with fractional parts throw.
* Never: static assert on any attempt to convert from float.
template <class I, class F>
safe<I> safe_trunc(F f);
Which throws only when the truncated value is out of range. Likewise we
would need safe_round, safe_floor and safe_ceil. Obviously a certain
amount of "bike shed colouring" may be required on those names ;)
we're experts on that subject here - bike shed central!




John Maddock via Boost
2017-03-13 11:27:41 UTC
Post by Robert Ramey via Boost
Post by John Maddock via Boost
int integer_part = my_float;
double fractional_part = my_float - integer_part;
which is of course pretty seriously unsafe :(
Casting to a safe<int> to ensure there is no overflow would simplify a
lot of error checking code.
how so?
safe<int> integer_part = my_float;

and job done.

John.
Post by Robert Ramey via Boost
Post by John Maddock via Boost
BTW for the record, Boost.Math uses itrunc/ltrunc/lltrunc for this
operation which throws when the float is too large for the target type.
Lots of ways to tackle this, I just wanted you to be aware that there
are genuine use cases where float-interoperability is useful and needed.
I've never doubted this. The main thing here is that I wanted to keep
the scope of the library small enough to actually do. Treatment of
safe<float> is likely a whole 'nuther thing - perhaps even more
intricate than dealing with int and unsigned int. I thought integers
would be simple; the project started out with just an afternoon's work.
I've long forgotten what I wanted/needed it for. It metastasized into
what it is today.
But as we see, I haven't been able to just ignore floating point. So
we'll have to address it at least as it interacts with integers.
Post by John Maddock via Boost
* Permissive, allow all conversions from float and truncate (as long as
the truncated value is in range).
* Strict, only allow conversion from in-range integral values. Values
with fractional parts throw.
* Never: static assert on any attempt to convert from float.
template <class I, class F>
safe<I> safe_trunc(F f);
Which throws only when the truncated value is out of range. Likewise we
would need safe_round, safe_floor and safe_ceil. Obviously a certain
amount of "bike shed colouring" may be required on those names ;)
we're experts on that subject here - bike shed central!


Peter Dimov via Boost
2017-03-12 14:54:06 UTC
Post by John Maddock via Boost
5) What is the purpose of class safe_literal? constexpr initialization
seems to work just fine without it?
The idea, I believe, is that if you have

safe_signed_range<0, 100> x;

and then you do

auto y = x * safe_signed_literal<2>();

you get safe_signed_range<0, 200> as the type of y. This could probably be
made less elaborate with a user-defined literal, for example

auto y = x * 2_sf;

or something like that.

Since x is not constexpr, it's not possible (I think) to achieve this result
without using a separate literal type to hold the compile-time constant 2.
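
A sketch of such a literal operator (assuming safe_signed_literal's
non-type parameter accepts the parsed value; base-10 digits only):

template <char... Ds>
constexpr long long parse_decimal() {
    constexpr char digits[] = { Ds... };
    long long v = 0;
    for (char c : digits)
        v = 10 * v + (c - '0');
    return v;
}

template <char... Ds>
constexpr auto operator"" _sf() {
    return boost::numeric::safe_signed_literal<parse_decimal<Ds...>()>();
}

// usage: auto y = x * 2_sf; // y's range narrowed exactly as above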


Robert Ramey via Boost
2017-03-12 15:56:26 UTC
Post by Peter Dimov via Boost
Post by John Maddock via Boost
5) What is the purpose of class safe_literal? constexpr initialization
seems to work just fine without it?
The idea, I believe, is that if you have
safe_signed_range<0, 100> x;
and then you do
auto y = x * safe_signed_literal<2>();
you get safe_signed_range<0, 200> as the type of y. This could probably
be made less elaborate with a user-defined literal, for example
auto y = x * 2_sf;
or something like that.
Since x is not constexpr, it's not possible (I think) to achieve this
result without using a separate literal type to hold the compile-time
constant 2.
here is the problem:

constexpr int i = 42; // i is constexpr and available at compile time
constexpr const safe_int x(i); // x is constexpr and available at compile time

constexpr const safe_int y(42); // y is NOT available at compile time!!!

constexpr const safe_int z(safe_signed_literal<42>()); // z is NOW available at compile time

So the problem is that literals are not considered constexpr. I believe
this is a problem with the way constexpr is defined.

Actually I have a whole rant on constexpr and const expressions. I
believe that the C++ standard has made this a lot more complex and less
useful than it could be. But I can pursue only one hopeless quest at a
time.

Robert Ramey



John Maddock via Boost
2017-03-12 19:11:04 UTC
Post by Robert Ramey via Boost
Post by Peter Dimov via Boost
Post by John Maddock via Boost
5) What is the purpose of class safe_literal? constexpr initialization
seems to work just fine without it?
The idea, I believe, is that if you have
safe_signed_range<0, 100> x;
and then you do
auto y = x * safe_signed_literal<2>();
you get safe_signed_range<0, 200> as the type of y. This could probably
be made less elaborate with a user-defined literal, for example
auto y = x * 2_sf;
or something like that.
That would be my preferred solution - to define a user-defined suffix,
so that:

4_safe

is a literal of type safe_signed_range<4,4>. No need for safe_literal
here IMO?
Post by Robert Ramey via Boost
Post by Peter Dimov via Boost
Since x is not constexpr, it's not possible (I think) to achieve this
result without using a separate literal type to hold the compile-time
constant 2.
constexpr int i = 42; // i is constexpr and available at compile time
constexpr const safe_int x(i); // x is constexpr and available at compile time
constexpr const safe_int y(42); // y is NOT available at compile time!!!
Oh yes it is!

There is nothing at all non-constexpr about integer literals. Consider
the following toy class, which restricts values to the range [-10,10]; it
performs runtime checking just like your safe class does, and still
works in constexpr:

#include <stdexcept>

class ten
{
private:
    int m_value;
public:
    template <class I>
    constexpr ten(I i) : m_value(i)
    {
        if (i > 10 || i < -10)
            throw std::overflow_error("Expected value exceeds +-10");
    }
    constexpr ten(const ten& a) : m_value(a.m_value) {}

    constexpr ten& operator += (const ten& i)
    {
        if ((i.m_value > 0) && (m_value > 0) && (10 - m_value < i.m_value))
            throw std::overflow_error("Addition results in a value > 10");
        if ((i.m_value < 0) && (m_value < 0) && (-10 - m_value > i.m_value))
            throw std::overflow_error("Addition results in a value < -10");
        m_value += i.m_value; // the actual addition
        return *this;
    }
    explicit constexpr operator int()const { return m_value; }
};

constexpr ten operator+(const ten& i, const ten& b)
{
    ten result(i);
    return result += b;
}

Now given:

template <int value>
struct ten_holder {};

we can write:

const constexpr ten i(5);
const constexpr ten j(5);
const constexpr ten k = j + i;
ten_holder<static_cast<int>(k)> holder;

And it all just works; of course anything that results in ten holding an
out-of-range value triggers a compiler error, because then a code path
that can't be executed at compile time (like a throw) is triggered.

In fact safe is partly working too:

constexpr const boost::numeric::safe<int> sy(42);
ten_holder<static_cast<int>(sy)> holder3; // OK, sy is available at compile time.

It's the arithmetic operators that are broken somewhere..... ah... FOUND IT!

Two functions called "dispatch", at checked_result.hpp:119 and
exception.hpp:33, need to be marked as constexpr in order for the binary
operators to be constexpr. With that change I can now write:

constexpr const boost::numeric::safe<int> sy(42);
ten_holder<static_cast<int>(sy)> holder3; // OK, sy is available at compile time
constexpr const boost::numeric::safe<int> syy(sy*sy);
ten_holder<static_cast<int>(syy)> holder4; // OK, syy is available at compile time

:)

BTW my experience with both constexpr and noexcept is that it's simply
not enough to test at runtime; you have to contrive compile time tests
as well, otherwise I can absolutely guarantee things will not work as
you expect - trust me, I've been there, done that, got the t-shirt!

What that means in practice - and you're not going to like this - is
that every runtime test has to have a compile time / constexpr
counterpart: things that generate valid "safe" values should compile in
a constexpr context (and yes, the value needs checking too, again at
compile time), while things that would throw at runtime would be
compile-fails. It's a heck of a lot of tests - you might want to write
a short program to spew out the files (the expected-compile things can
all go in one file, but expected failures each need their own test case
cpp file).
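
For example, once those dispatch functions are constexpr, the compile-time
counterpart of a runtime addition test is simply:

constexpr boost::numeric::safe<int> a(2);
constexpr boost::numeric::safe<int> b(3);
static_assert(static_cast<int>(a + b) == 5, "constexpr addition broken");
// an overflowing constexpr expression, by contrast, should *fail to
// compile* - hence one expected-failure cpp file per case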

HTH, John.
Post by Robert Ramey via Boost
constexpr const safe_int z(safe_signed_literal<42>()); // z is NOW
available at compile
So the problem is that literals are not considered constexpr. I
believe this is a problem is with the way constexpr is defined.
Actually I have a whole rant on constexpr and const expressions. I
believe that C++ standard has made this a lot more complex and less
useful than it could be. But I can pursue only one hopeless quest at
at time.
Robert Ramey


John McFarlane via Boost
2017-03-12 19:25:31 UTC
On Sun, Mar 12, 2017 at 12:11 PM John Maddock via Boost <
Post by John Maddock via Boost
BTW my experience with both constexpr and noexcept is that it's simply
not enough to test at runtime, you have to contrive compile time tests
as well, otherwise I can absolutely guarantee things will not work as
you expect - trust me I've been there, done that, got the t-shirt!
What that means in practice - and you're not going to like this - is
that every runtime test has to have a compile time / constexpr
counterpart, things that generate valid "safe" values should compile in
a constexpr context (and yes the value needs checking to, again at
compile time), while things that would throw at runtime, would be
compile-fails. It's a heck of a lot of tests - you might want to write
a short program to spew out the files (the expected compile things can
all go in one file, but expected failures each need their own test case
cpp file).
HTH, John.
I understand that not all tests can be compile-time. But for the ones that
can, why does there need to be an equivalent run-time test? I've found that
many/most tests of literal types can be conducted at compile time. Am I not
testing my `constexpr` functions adequately?

Robert Ramey via Boost
2017-07-09 18:44:45 UTC
John,

I'm going through all the issues raised in the review of the safe
numerics library. I have to say I thought I was prepared, but it turns
out that there have been many more problems than I realized. I actually
wonder how it got accepted. I presume it's because no one really
understands it. Anyway, there are a couple of points which are
interesting to expand upon a little. I'm sending this via private email
since I don't know that it's that interesting for the list.
Post by John Maddock via Boost
OK, here's my review of Robert's library.
By way of something different, I've tried to not read the docs (though
I ended up scanning them in the end) and just tried to use the
safe<int>(-1) & 1
constexpr safe_base operator~()
The static assert is triggered for *unsigned types* and not for
signed. Even if this is a compiler issue, it indicates a missing test
case or two.
The whole issue of bitwise operators has had to be rethought and
reimplemented. I think it's better now.
Post by John Maddock via Boost
"conversion of integer to float loses precision" which isn't quite
what you meant to say.
* Conversion from a float holding an integer value that's within the
range of the target type to always succeed.
* Conversion from a float holding a non-integral value to
conditionally succeed (with truncation) depending on the policy in effect.
* Conversion *to* a float may also fail if the integer is outside the
range of the float (no checking may be required if the number of bits
in the integer is less than the number of bits in the float).
I'm still not decided what behavior I want. There is tons of code like
float f = 1 and int j = 1.5f which I personally don't like and would
like to inhibit. I'm still going back and forth on this.
Post by John Maddock via Boost
constexpr boost::numeric::safe<int> j = 0;
constexpr boost::numeric::safe<int> k = 3;
constexpr boost::numeric::safe<int> l = j + k;
constexpr IS fully supported.
Post by John Maddock via Boost
Similarly if I try to add a literal to a safe<int>. Again, since the
tests all pass, I'm assuming they're missing constexpr tests?
Hmmm - I'm not understanding this.
constexpr safe<int8_t> x = 120; // includes runtime check that 120 < 127
Post by John Maddock via Boost
5) What is the purpose of class safe_literal? constexpr initialization
seems to work just fine without it?
Right.

But there are differences:
a) safe_unsigned_literal<n> is more like a safe_unsigned_range<n, n>.
It keeps the range around at compile time. So if you make an expression
like

uint8_t x = 42 + safe_unsigned_literal<42>()

no checking code need be emitted, because it can be known at compile time
that no overflow can ever occur. But

constexpr uint8_t x = 242 + safe_unsigned_literal<42>()

should fail at compile time.

It's been suggested that safe_literal be replaced with just
integral_constant. But

b) some traits like base_type and base_value are specialized for
safe_..._literal. Specializing these traits for integral_constants
might have some unanticipated and surprising consequences.

c) making integral constants a "safe" type would also have unforeseen
consequences.
Post by John Maddock via Boost
6) In the docs, the example of using error handler dispatch is
nonsense (top of page 41)?
Right - I fixed the whole exception functionality, which solved a lot of
problems. It was ill conceived in the first place.
Post by John Maddock via Boost
7) Related to the above, I see the need to be able to define your own
primitives which behave just like yours.
The example I have is for bit-scan-left/right which has as
precondition that the argument is non-zero, but there are other
bit-fiddling operations (bit count etc) which some folks may require.
A "cookbook" example would be good.
Hmmm - are you referring to the functions in utility.hpp? I conjured
up/stole/borrowed them from multiple places which I now don't recall. I
added them "as needed". I can't see anyone needing these in order to use
the safe numerics library. But I'm not sure that's what you need.
Boost has a bunch of this stuff sprinkled all over the place. It would
be nice to consolidate this in one coherent "library". I don't know if
you're suggesting this.
Post by John Maddock via Boost
8) In the tests, you have an undocumented function "log" in the
library. Not sure if this is intended to be part of the API or not,
but IMO it's misnamed. ilog2 seems to be a semi-standard name for this.
Hmmm - the boost version
file:///Users/robertramey/WorkingProjects/modular-boost/libs/integer/doc/html/boost_integer/log2.html
is called static_log2. But I'm not wedded to the name. I like my
version as it's a lot faster to compile.
Post by John Maddock via Boost
9) By way of testing the library's performance I hooked it into the
Miller-Rabin primality test in the multiprecision lib (test attached,
it counts how many probable primes there are below 30000000), here are
time = 17.6503 seconds
count = 1857858
time = 83.5055 seconds
count = 1857858
time = 36.9026 seconds
count = 1857858
time = 17.1037 seconds
count = 1857858
time = 18.4377 seconds
count = 1857858
// the above is sort of unbelievable to me. I notice that the title
of the second test doesn't include the word "safe" so maybe there is a
mistake. - I'm still investigating
Post by John Maddock via Boost
time = 21.0592 seconds
count = 1857858
// this also looks a little too good to be true - it also doesn't say safe.
Post by John Maddock via Boost
I'm surprised by the difference in times between msvc and gcc, also
deeply impressed that there is effectively no performance penalty with
GCC (I was expecting there should be some!)
I agree - the whole thing is sort of suspicious. Here are my times on my
apple machine -
Mac mini (Late 2012)
2.3 GHz Intel Core i7

Testing type unsigned:
time = 16.775
count = 1857858
Testing type safe<unsigned>:
time = 36.6576
count = 1857858
Testing type 32-bit cpp_int:
time = 41.4232
count = 1857858
Program ended with exit code: 0

These times seem to fall in between yours. It's sort of amazing that the
unsigned versions all report almost exactly the same time.
Post by John Maddock via Boost
10) I don't find the scope of the library over-broad, in fact I
probably find it about right wrt the policies etc.
11) There were a few traits and such that it would be nice to have,
* A trait for extracting the underlying type of a safe<> - use case
for example where you enable_if a function on is_safe so you don't have
the template argument list to safe<>.
This is base_type<T>::type. It's described in the SafeNumeric concept
part of the manual
Post by John Maddock via Boost
* Do we need a function for extracting the underlying type of a
safe<>? Or is casting to the underlying type optimal?
Casting to the underlying type is a zero cost operation. It just
returns a reference to the value.
Post by John Maddock via Boost
Actually, maybe having a function would negate the need
for the above trait, as we could just write 'auto x = mysafe.value();' ?
There also exists base_value(s), which returns the value. It's also
described in the SafeNumeric concept.
Post by John Maddock via Boost
* Do we need traits equivalent to
is_signed/is_unsigned/make_signed/make_unsigned? My gut feeling is
yes we do, the issue is what to call them, and where to put them -
we're not allowed to specialize the type_traits versions (either boost
or std), but we would need a central "one true trait" version that can
be specialized by other libraries
as well (Multiprecision for example). We've backed ourselves into a
corner over this I fear!
I addressed this by using numeric_limits. In the safe library, I don't
use std::is_signed. I use std::numeric_limits<?>::is_signed. This
works for safe types because I specialize numeric_limits for all safe types.
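
So, for example (assuming safe<unsigned> mirrors its underlying type here):

static_assert(std::numeric_limits<boost::numeric::safe<int>>::is_signed, "");
static_assert(!std::numeric_limits<boost::numeric::safe<unsigned>>::is_signed, "");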

In the "backend" which handles the primitive operations "checked_result"
(outcome if you will). I use only the std::is_signed::value since the
back end only handles the primitive operations and not the safe ones.

I'm not convinced of the utility of make_signed in this context. In
safe numerics, any time we want to change the type of something, we have
to be sure we don't change its arithmetic value. We'll get either a
runtime or compile time error if we do. So
static_cast<unsigned int>(x) where x is signed will work and be checked
at compile time if possible, at runtime otherwise. Note that casting is
constexpr, so it can be done at compile time and still work.
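
That is, something like:

boost::numeric::safe<int> x = 42;
auto a = static_cast<unsigned>(x); // ok - the arithmetic value is preserved
boost::numeric::safe<int> y = -1;
auto b = static_cast<unsigned>(y); // checked - the value would change, so an error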
Post by John Maddock via Boost
12) When calling boost::integer::gcd/lcm the sub-optimal(?) non-binary
versions are called for safe<> - you might want to look into seeing
what traits
need to be defined to ensure that the same version as the underlying
type is called.
I need more help understanding this.

safe<int> will be implicitly converted to any other integer type. This
of course is a blessing and a curse. I've gone back and forth on this. Now
I've figured out what I want to do. I'm going to leave the implicit
conversion in, but permit the error policy to trap it at compile time.
So I'll be creating an "explicit_static_cast<T>(t)" for those who want
to trap the implicit ones while permitting the explicit ones. The
situation I'm addressing is where one unintentionally loses the "safe"
attribute when passing a safe value to an unsafe integer type. You'll
Post by John Maddock via Boost
Those are all the issues I can see for now... in conclusion, as for
whether the library should be accepted, yes I believe it should be,
modulo the issues above.
Regards, John.
--
Robert Ramey
www.rrsd.com
(805)569-3793

