How multiprecision optimizes allocation?


Boost - Users mailing list
I have tested boost::multiprecision::cpp_int and compared it with the simplest schoolbook algorithm for integer multiplication.
My test method is several times slower than Boost's multiplication because I resize the product vector to size(first) + size(second), and this allocation takes most of the time; with a static array on the stack instead it is fast for small numbers. What is the secret of computing z = x*y quickly in a loop when x and y are quite small, with Boost's cpp_int?
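
The exact benchmark was not posted; the sketch below is only an assumption of what is being compared: a vector-based schoolbook multiply that allocates its product on every call, versus the plain cpp_int loop (names and operand values are illustrative).

// Assumed shape of the benchmark: schoolbook multiplication with a
// freshly allocated product vector on every call, compared against
// letting cpp_int do the same multiplication in a loop.
#include <boost/multiprecision/cpp_int.hpp>
#include <cstdint>
#include <vector>

// Naive schoolbook multiply; the product vector is sized to
// a.size() + b.size(), so every call pays for a heap allocation.
std::vector<std::uint32_t> mul_school(const std::vector<std::uint32_t>& a,
                                      const std::vector<std::uint32_t>& b)
{
    std::vector<std::uint32_t> p(a.size() + b.size(), 0);   // the costly allocation
    for (std::size_t i = 0; i < a.size(); ++i)
    {
        std::uint64_t carry = 0;
        for (std::size_t j = 0; j < b.size(); ++j)
        {
            std::uint64_t t = std::uint64_t(a[i]) * b[j] + p[i + j] + carry;
            p[i + j] = static_cast<std::uint32_t>(t);
            carry = t >> 32;
        }
        p[i + b.size()] = static_cast<std::uint32_t>(carry);
    }
    return p;
}

int main()
{
    using boost::multiprecision::cpp_int;
    cpp_int x = 123456789, y = 987654321, z;
    for (int i = 0; i < 1000000; ++i)
        z = x * y;   // the cpp_int side of the comparison
    return 0;
}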




Re: How multiprecision optimizes allocation?

Boost - Users mailing list

On 05/11/2019 06:12, Andy via Boost-users wrote:
> I have tested boost::multiprecision::cpp_int and compared it with the
> simplest schoolbook algorithm for integer multiplication.
> My test method is several times slower than Boost's multiplication
> because I resize the product vector to size(first) + size(second), and
> this allocation takes most of the time; with a static array on the
> stack instead it is fast for small numbers. What is the secret of
> computing z = x*y quickly in a loop when x and y are quite small, with
> Boost's cpp_int?

There's no allocation for small numbers.

John.
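
One rough way to check that claim (not from the thread, just a sketch): count calls to the global operator new around the loop. For operands small enough to fit in cpp_int's internal buffer the count should stay at zero.

// Counts heap allocations made while multiplying small cpp_ints in a loop.
// Small operands fit in cpp_int's internal buffer, so the count stays at 0.
#include <boost/multiprecision/cpp_int.hpp>
#include <cstdio>
#include <cstdlib>
#include <new>

static std::size_t alloc_count = 0;

void* operator new(std::size_t n)
{
    ++alloc_count;
    if (void* p = std::malloc(n))
        return p;
    throw std::bad_alloc();
}
void operator delete(void* p) noexcept { std::free(p); }
void operator delete(void* p, std::size_t) noexcept { std::free(p); }

int main()
{
    using boost::multiprecision::cpp_int;
    cpp_int x = 123456789, y = 987654321, z;
    std::size_t before = alloc_count;
    for (int i = 0; i < 100000; ++i)
        z = x * y;
    std::printf("allocations inside the loop: %zu\n", alloc_count - before);
    return 0;
}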




Re: How multiprecision optimizes allocation?

Boost - Users mailing list
sizeof(boost::multiprecision::cpp_int) is only 24 bytes = 6 * 32-bit limbs.
Does this mean that up to 24 bytes are stored on the stack in the variable itself, and larger values are allocated on the heap?
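
The 24-byte figure presumably comes from a check along these lines (the number varies with platform, compiler and Boost version):

#include <boost/multiprecision/cpp_int.hpp>
#include <cstdio>

int main()
{
    using boost::multiprecision::cpp_int;
    std::printf("sizeof(cpp_int) = %zu bytes\n", sizeof(cpp_int));   // 24 on the poster's system
    return 0;
}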



Re: How multiprecision optimizes allocation?

Boost - Users mailing list

On 05/11/2019 09:12, Andy via Boost-users wrote:
> sizeof(boost::multiprecision::cpp_int) is only 24 bytes = 6 * 32-bit limbs.
> Does this mean that up to 24 bytes are stored on the stack in the
> variable itself, and larger values are allocated on the heap?

There are other member variables making up that size. The default (as
used by the cpp_int typedef) is never less than 2 whole limbs (so 128
bits when __int128 is available); otherwise it is however many limbs
will fit inside sizeof(unsigned) + sizeof(void*), which is 4 32-bit
limbs on MSVC.
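
A related option worth adding here (it is not part of the reply above): the fixed-width cpp_int typedefs carry no allocator at all, so every value lives inside the object and a loop like the original z = x*y can never allocate, whatever the platform's internal limb count.

// Fixed-width cpp_int: all storage is inside the object, no allocator,
// so the multiplication loop is guaranteed allocation-free.
#include <boost/multiprecision/cpp_int.hpp>
#include <iostream>

int main()
{
    using boost::multiprecision::int128_t;   // fixed 128-bit signed cpp_int
    int128_t x = 123456789, y = 987654321, z = 0;
    for (int i = 0; i < 1000000; ++i)
        z = x * y;
    std::cout << z << "\n";   // print the result so the loop is not optimized away
    return 0;
}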



