improving the benchmarking API


Hi there,

I have been looking at the existing benchmarks, to see how to extend them to cover more functions as well as alternative implementations. The existing benchmarks have a few shortcomings that I would like to address:

* a single benchmark executable measures a range of operations and writes its output to stdout, which makes it impossible to benchmark individual operations.

* operations are measured with a single set of inputs. It would be very helpful to be able to run operations on a range of inputs, to see how they perform over a variety of problem sizes.

* the generated output should be easily machine-readable, so it can be post-processed into benchmark reports (including performance charts).


The above will be particularly useful as we are preparing PRs to add support for OpenCL backends (work done by Fady Essam as a GSoC project).


I have attempted to prototype a few new benchmarks (matrix-matrix as well as matrix-vector products, for a variety of value types), together with a simple script to produce graphs. For example, the attached plot was produced by running:

```
.../mm_prod -t float > mm_prod_float.txt
.../mm_prod -t double > mm_prod_double.txt
.../mm_prod -t fcomplex > mm_prod_fcomplex.txt
.../mm_prod -t dcomplex > mm_prod_dcomplex.txt
plot.py mm_prod_*.txt
```
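
To give a rough idea of what such a benchmark executable does internally, here is a minimal sketch (illustrative only; the loop bounds and output columns are placeholders, not the actual code in the PR). It times a uBLAS matrix-matrix product over a range of problem sizes and streams one "size seconds" record per line to stdout, which is easy to post-process into charts:

```
#include <boost/numeric/ublas/matrix.hpp>
#include <chrono>
#include <cstddef>
#include <iostream>

int main()
{
    using matrix = boost::numeric::ublas::matrix<double>;
    for (std::size_t n = 32; n <= 1024; n *= 2)
    {
        // set up the inputs for this problem size
        matrix a(n, n), b(n, n);
        for (std::size_t i = 0; i < n; ++i)
            for (std::size_t j = 0; j < n; ++j)
                a(i, j) = b(i, j) = 1.0;

        // time the operation under test
        auto start = std::chrono::steady_clock::now();
        matrix c = prod(a, b);   // prod found via ADL in boost::numeric::ublas
        auto stop = std::chrono::steady_clock::now();

        // one machine-readable record per problem size
        std::chrono::duration<double> elapsed = stop - start;
        std::cout << n << ' ' << elapsed.count() << '\n';
    }
}
```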

I'd appreciate any feedback, both on the general concepts, as well as the code, which is here: https://github.com/boostorg/ublas/pull/57

Thanks,


Stefan

[Attachment: mm_prod.png (138K)]

Re: improving the benchmarking API

Hi, 

I did a similar implementation in my GSoC project, covering most of the operations.

A small difference in my implementation is that running the executable for each operation produces the file containing the benchmarking data directly.

Thanks,
Fady Essam


Re: improving the benchmarking API


On 2018-09-12 01:06 AM, fady esam via ublas wrote:

Hi, 

I did a similar implementation in my GSoC project, covering most of the operations.

A small difference in my implementation is that running the executable for each operation produces the file containing the benchmarking data directly.

That's possible. But writing to a file requires either an additional command-line parameter to specify the filename or the use of a fixed filename.
My proposal simply streams to stdout, and thus lets the caller redirect to a file of their choice, which offers the same functionality with a simpler interface.

Stefan

Re: improving the benchmarking API

OK, I can work on these edits.


Re: improving the benchmarking API

I changed it to output to std::cout and pushed the change to my GSoC repo.

Thanks,
Fady


Re: improving the benchmarking API


Hi Fady,

On 2018-09-13 03:30 AM, fady esam via ublas wrote:

I changed it to output to std::cout and pushed the change to my GSoC repo.

Fine. But there are also a few other additions I made, so the code in the PR has evolved quite a bit since I sent you the initial version of the benchmark API a few months ago. I would like to merge my PR (plus some follow-up work to adjust the remaining existing benchmarks to that API) before preparing your GSoC work for a merge. I'm hopeful we can get to that point within the next few weeks.

Thanks,

Stefan

Re: improving the benchmarking API

OK, fine. Whenever possible, you can push a PR on "benchmarks.hpp" and I will try to edit the operations files accordingly.


Re: improving the benchmarking API


On 2018-09-13 09:55 AM, fady esam via ublas wrote:

OK, fine. Whenever possible, you can push a PR on "benchmarks.hpp" and I will try to edit the operations files accordingly.

Actually, I'd like to complete my PR (which will be merged into the upstream repo), rather than pushing changes into your repo. So once that's done, you should update your repo from that, and rebase your code.

Best,

Stefan

Re: improving the benchmarking API

OK, good.


Deciding on tensor parameters

The GSoC 2018 project "Adding tensor support" has been successfully completed, so Boost.uBLAS may support tensors in the future. The code, project and documentation can be found here and here.

The tensor template class is parametrized in terms of data type, storage format (first- or last-order), storage type (e.g. std::vector or std::array):

template<class T, class F=first_order, class A=std::vector<T,std::allocator<T>>>
class tensor;

An instance of the tensor template class has a dynamic rank (number of dimensions), with the dimensions held by a shape class. The shape class is an adaptor of std::vector whose size is the rank:

// {3,4,2} could be runtime variables of an integer type.
auto A = tensor<float>{make_shape(3,4,2)};
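
For illustration, such a shape adaptor could look roughly like the following (a sketch only; the member functions shown here are assumptions, not necessarily the actual interface):

```
#include <cstddef>
#include <initializer_list>
#include <vector>

// Thin wrapper around std::vector<std::size_t>: its size is the (runtime) rank.
class shape
{
public:
    shape(std::initializer_list<std::size_t> extents) : extents_(extents) {}
    std::size_t rank() const { return extents_.size(); }
    std::size_t operator[](std::size_t i) const { return extents_[i]; }
private:
    std::vector<std::size_t> extents_;
};

template <class... Extents>
shape make_shape(Extents... e)
{
    return shape{static_cast<std::size_t>(e)...};
}

// usage: auto s = make_shape(3, 4, 2);  // s.rank() == 3
```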

---------------------
---------------------

I am thinking of redesigning the tensor template class so that the rank is a compile-time parameter:

template<class T, std::size_t N, class F=first_order<N>, class A=std::vector<T,std::allocator<T>>>
class tensor;

An instance of a tensor template class could be generated as follows:
// {3,4,2} could be runtime variables of an integer type.
auto A = tensor<float,3>(make_shape(3,4,2));

This instantiation could definitely be improved. However, having a static rank has the following advantages and disadvantages:

-------------

Advantages:
  1. Improved runtime behavior of basic tensor operations, by roughly 5% to 30% (depending, according to my findings, on the length of the innermost loop).
  2. The ability to statically distinguish between different tensor types at compile time: tensor<float,3> is a different type than tensor<float,4>. If so, why not define matrix as an alias:

template <class type, class format, class storage>
using matrix = tensor<type,2,format,storage>;

We would only need to specify and implement one data structure, 'tensor', and, if needed, provide optimized functions for matrices. This simplifies maintenance (a short sketch of this follows below).
There might also be advantages in terms of subtensor and iterator support. However, implementing them will be harder.
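
To sketch the alias idea from point 2 above (stand-in code only; the real class template is the one proposed above, and the storage format parameter is omitted for brevity):

```
#include <cstddef>
#include <vector>

// Illustrative stand-in for the proposed static-rank tensor.
template <class T, std::size_t N, class A = std::vector<T>>
class tensor
{
public:
    A&       storage()       { return data_; }
    const A& storage() const { return data_; }
private:
    A data_;
};

// matrix is just an alias for a rank-2 tensor.
template <class T, class A = std::vector<T>>
using matrix = tensor<T, 2, A>;

// One generic routine serves every rank ...
template <class T, std::size_t N, class A>
void fill(tensor<T, N, A>& t, const T& value)
{
    for (auto& x : t.storage()) x = value;
}

// ... while matrices can still get a dedicated, optimized overload, which is
// preferred by partial ordering because the rank is fixed to 2 here.
template <class T, class A>
void fill(tensor<T, 2, A>& m, const T& value)
{
    for (auto& x : m.storage()) x = value;
}

// usage: matrix<float> m; fill(m, 1.0f);  // picks the rank-2 overload
```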

---------
Disadvantages:
  1. The implementations become more complicated, especially for tensor multiplications and tensor reshaping.
  2. With a static rank, the interfaces are harder to use (the rank has to be set as a template parameter).
  3. The number of contracted dimensions must be known at compile time. Therefore, implementing some tensor algorithms would only be possible with template specialization instead of simple for loops (see the sketch below). Writing such algorithms becomes more difficult.
Although Eigen and Boost.MultiArray decided in favor of compile time, this might be a critical point for uBLAS.
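
To make disadvantage 3 concrete, here is a rough sketch (illustrative code, not the actual implementation) of visiting every entry of a rank-N tensor when N is a compile-time parameter: the loop nest is expressed by recursion over a template parameter, terminated with C++17's if constexpr, whereas with a dynamic rank it would be a single runtime loop over an index vector.

```
#include <array>
#include <cstddef>
#include <iostream>

// Recurse over the dimensions: Dim is the loop level currently being generated.
template <std::size_t N, std::size_t Dim = 0, class F>
void for_each_index(const std::array<std::size_t, N>& extents,
                    std::array<std::size_t, N>& index, F&& f)
{
    if constexpr (Dim == N)
    {
        f(index);                         // one complete multi-index
    }
    else
    {
        for (std::size_t i = 0; i < extents[Dim]; ++i)
        {
            index[Dim] = i;
            for_each_index<N, Dim + 1>(extents, index, f);
        }
    }
}

int main()
{
    std::array<std::size_t, 3> extents{3, 4, 2}, index{};
    std::size_t count = 0;
    for_each_index<3>(extents, index, [&](const auto&) { ++count; });
    std::cout << count << '\n';           // prints 24 (= 3*4*2)
}
```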

I am working on this right now, and I am also trying to support p! linear storage formats as a compile-time parameter, where p is the rank of the tensor. Unit testing actually becomes very hard, as I am not able to use fixtures so easily. Supporting both static and dynamic rank would be a maintenance nightmare.

Cheers
C


Re: Deciding on tensor parameters


Hi Cem,

Thanks for sending this out!


On 2018-09-13 11:34 AM, Cem Bassoy via ublas wrote:
The GSoC 2018 project "Adding tensor support" has been successfully completed, so Boost.uBLAS may support tensors in the future. The code, project and documentation can be found here and here.

The tensor template class is parametrized in terms of data type, storage format (first- or last-order), storage type (e.g. std::vector or std::array):

(Minor nit-pick: it's a class template. There is no such thing as "template classes" in C++ :-). I know the existing ublas docs are full of that spelling...)

template<class T, class F=first_order, class A=std::vector<T,std::allocator<T>>>
class tensor;

An instance of the tensor template class has a dynamic rank (number of dimensions), with the dimensions held by a shape class. The shape class is an adaptor of std::vector whose size is the rank:

// {3,4,2} could be runtime variables of an integer type.
auto A = tensor<float>{make_shape(3,4,2)};

---------------------
---------------------
I am thinking to redesign the tensor template class where  the rank is a compile time parameter:

template<class T, std::size_t N, class F=first_order<N>, class A=std::vector<T,std::allocator<T>>>
class tensor;

An instance of a tensor template class could be generated as follows:
// {3,4,2} could be runtime variables of an integer type.
auto A = tensor<float,3>(make_shape(3,4,2));

This instantiation could be definitely improved. However, having a static rank has the following advantages and disadvantages:

-------------

Advantages:
  1. Improved runtime behavior of basic tensor operations, by roughly 5% to 30% (depending, according to my findings, on the length of the innermost loop).
  2. The ability to statically distinguish between different tensor types at compile time: tensor<float,3> is a different type than tensor<float,4>. If so, why not define matrix as an alias:

template <class type, class format, class storage>
using matrix = tensor<type,2,format,storage>;

We would only need to specify and implement one data structure ' tensor ' and if needed  provide optimized functions for matrices. This simplifies the maintenance.

A big advantage (which has been my main motivation for pushing for this solution) is that such a scenario would be fully in line with the existing Boost.uBLAS API, so your work becomes a natural extension of what we already have.

Alternatively, if you keep the rank a runtime parameter, you are basically proposing an entirely new API, which means that Boost.uBLAS users will have to decide whether to use the old or the new API, which I'm afraid will result in a fragmentation of the community. Likewise, many existing operations only support existing vector and matrix types, so maintainers will have more work to do to support both APIs.

That, to me as library maintainer, is a very high cost, so I'm reluctant to accept such a change, even if the proposed API with runtime ranks is otherwise sound.

Also there might be advantages in terms of subtensor and iterator support. However implementing them will be harder. 

---------
Disadvantages:
  1. The implementations become more complicated especially for tensor multiplications and tensor reshaping.

I have worked on a BLAS library with compile-time constant ranks. And while capturing parameters such as ranks in the type system itself can indeed be a bit of a challenge, I think it's definitely doable, and may even lead to clearer code down the road.



  2. With static rank the interfaces are harder to use (setting the rank as a template parameter).

That depends on the use case. It simply means that you have to think about the rank slightly differently, while writing code.
(It could simply mean that you have to drag along an additional template parameter, if you want to write generic code. But as I mentioned above, this could arguably lead to clearer code, so I consider this a feature, not a bug. :-) )

  3. The number of contracted dimensions must be known at compile time. Therefore, implementing some tensor algorithms would only be possible with template specialization instead of simple for loops. Making algorithms becomes more difficult.

Right.

Although Eigen and Boost.MultiArray decided  for compile time, it might be a critical point for uBLAS.

I am working on this right now, and I am also trying to support p! linear storage formats as a compile-time parameter, where p is the rank of the tensor. Unit testing actually becomes very hard, as I am not able to use fixtures so easily. Supporting both static and dynamic rank would be a maintenance nightmare.

Yeah, the parameter space to cover grows exponentially. But that is true no matter whether the rank is determined at compile-time or at runtime. The difference is only in whether you use normal functions or meta-functions to compute derived ranks, storage formats, et al.


Stefan

Re: Deciding on tensor parameters



On Thu, Sep 13, 2018 at 6:12 PM, Stefan Seefeld via ublas <[hidden email]> wrote:

Hi Cem,

thanks for sending this out !


On 2018-09-13 11:34 AM, Cem Bassoy via ublas wrote:
The GSoC 2018 project "Adding tensor support" has been successfully completed, so Boost.uBLAS may support tensors in the future. The code, project and documentation can be found here and here.

The tensor template class is parametrized in terms of data type, storage format (first- or last-order), storage type (e.g. std::vector or std::array):

(Minor nit-pick: it's a class template. There is no such thing as "template classes" in C++ :-). I know the existing ublas docs are full of that spelling...)


Actually, there is such a thing as a template class: it is said to be an instantiation of a class template (see https://isocpp.org/wiki/faq/templates).
However, I used the term incorrectly here :-)

 
template<class T, class F=first_order, class A=std::vector<T,std::allocator<T>>>
class tensor;

An instance of the tensor template class has a dynamic rank (number of dimensions), with the dimensions held by a shape class. The shape class is an adaptor of std::vector whose size is the rank:

// {3,4,2} could be runtime variables of an integer type.
auto A = tensor<float>{make_shape(3,4,2)};

---------------------
---------------------
I am thinking to redesign the tensor template class where  the rank is a compile time parameter:

template<class T, std::size_t N, class F=first_order<N>, class A=std::vector<T,std::allocator<T>>>
class tensor;

An instance of a tensor template class could be generated as follows:
// {3,4,2} could be runtime variables of an integer type.
auto A = tensor<float,3>(make_shape(3,4,2));

This instantiation could be definitely improved. However, having a static rank has the following advantages and disadvantages:

-------------

Advantages:
  1. Improved runtime behavior of basic tensor operations, by roughly 5% to 30% (depending, according to my findings, on the length of the innermost loop).
  2. The ability to statically distinguish between different tensor types at compile time: tensor<float,3> is a different type than tensor<float,4>. If so, why not define matrix as an alias:

template <class type, class format, class storage>
using matrix = tensor<type,2,format,storage>;

We would only need to specify and implement one data structure ' tensor ' and if needed  provide optimized functions for matrices. This simplifies the maintenance.

A big advantage (which has been my main motivation for pushing for this solution) is that such a scenario would be fully in line with the existing Boost.uBLAS API, so your work becomes a natural extension of what we already have.

I think just the contrary is the case. There would be no extension of the old dense matrix class template; the new alias template would have to replace the old one, because we cannot have the same identifier twice in the same namespace. In that case, we would need to port all vector and matrix functions to the new tensor type. The existing vector and matrix class templates are not alias templates but distinct class templates. If I am not mistaken, adding the tensor as a class template, as it is right now, would be the uBLAS way.

 

Alternatively, if you keep the rank a runtime parameter, you are basically proposing an entirely new API, which means that Boost.uBLAS users will have to decide whether to use the old or the new API, which I'm afraid will result in a fragmentation of the community. Likewise, many existing operations only support existing vector and matrix types, so maintainers will have more work to do to support both APIs.

That, to me as library maintainer, is a very high cost, so I'm reluctant to such a change, even if the proposed API with runtime ranks is otherwise sound.

Yes, I agree with you on that point.

 

Also there might be advantages in terms of subtensor and iterator support. However implementing them will be harder. 

---------
Disadvantages:
  1. The implementations become more complicated especially for tensor multiplications and tensor reshaping.

I have worked on a BLAS library with compile-time constant ranks. And while capturing parameters such as ranks in the type system itself can indeed be a bit of a challenge, I think it's definitely doable, and may even lead to clearer code down the road.

Yes I agree.
 


  2. With static rank the interfaces are harder to use (setting the rank as a template parameter).

That depends on the use case. It simply means that you have to think about the rank slightly differently, while writing code.
(It could simply mean that you have to drag along an additional template parameter, if you want to write generic code. But as I mentioned above, this could arguably lead to clearer code, so I consider this a feature, not a bug. :-) )

Yes, I also do not consider it a bug :-). For us as library designers, C++ 'experts', this might be a nice feature.
But 'normal' users, not library designers, especially those who are used to Matlab, Python, Octave, Scilab etc., are not used to worrying about template parameters. There still might be a way to elegantly instantiate tensors. However, programming and implementing tensor algorithms definitely becomes harder with template specialization or with if constexpr.

 

  3. The number of contracted dimensions must be known at compile time. Therefore, implementing some tensor algorithms would only be possible with template specialization instead of simple for loops. Making algorithms becomes more difficult.

Right.

Although Eigen and Boost.MultiArray decided  for compile time, it might be a critical point for uBLAS.

I am working on this right now, and I am also trying to support p! linear storage formats as a compile-time parameter, where p is the rank of the tensor. Unit testing actually becomes very hard, as I am not able to use fixtures so easily. Supporting both static and dynamic rank would be a maintenance nightmare.

Yeah, the parameter space to cover grows exponentially. But that is true no matter whether the rank is determined at compile-time or at runtime. The difference is only in whether you use normal functions or meta-functions to compute derived ranks, storage formats, et al.

Well, yes. I experienced difficulty not only in covering the parameter space but also in setting up the unit tests with fixtures when using template functions. But that might not be the main issue here.
 
Cheers
C


Re: Deciding on tensor parameters


On 2018-09-13 02:06 PM, Cem Bassoy via ublas wrote:



On Thu, Sep 13, 2018 at 6:12 PM, Stefan Seefeld via ublas <[hidden email]> wrote:

On 2018-09-13 11:34 AM, Cem Bassoy via ublas wrote:

We would only need to specify and implement one data structure ' tensor ' and if needed  provide optimized functions for matrices. This simplifies the maintenance.

A big advantage (which has been my main motivation for pushing for this solution) is that such a scenario would be fully in line with the existing Boost.uBLAS API, so your work becomes a natural extension of what we already have.

I think, just the contrary is the case. There would be no extension to the old dense matrix class template, as the new alias template would replace the old one because we cannot have the identifier in the same namespace. In the above case, we need to port all vector and matrix functions for the new tensor type. The vector and matrix class templates are not alias templates but are distinct class templates. If I am not mistaken, adding the tensor as a class template as it is right now would be the uBLAS way.

I think I may have expressed myself poorly. Yes, I agree: with your approach you would introduce new "matrix" and "vector" types (as template aliases). It is my hope however that we could use those as drop-in replacements for the old matrix and vector classes, i.e. I would like to simply replace those (assuming of course that they are sufficiently API-compatible to make this possible). In that case, no other code (such as stand-alone functions / operators taking vector and matrix arguments) would need to change.

Of course, if we need to port code over, it's a sign that the old and new types aren't API-compatible, so this becomes a bigger question (as it also affects users). Again, my assumption was that we could come up with a new API that was backward-compatible.

 

Alternatively, if you keep the rank a runtime parameter, you are basically proposing an entirely new API, which means that Boost.uBLAS users will have to decide whether to use the old or the new API, which I'm afraid will result in a fragmentation of the community. Likewise, many existing operations only support existing vector and matrix types, so maintainers will have more work to do to support both APIs.

That, to me as library maintainer, is a very high cost, so I'm reluctant to such a change, even if the proposed API with runtime ranks is otherwise sound.

Yes agree with you on that point.

Glad to hear that! :-)

[...]

Thanks,

Stefan

Re: Deciding on tensor parameters



On Thu, Sep 13, 2018 at 8:35 PM, Stefan Seefeld via ublas <[hidden email]> wrote:

On 2018-09-13 02:06 PM, Cem Bassoy via ublas wrote:



On Thu, Sep 13, 2018 at 6:12 PM, Stefan Seefeld via ublas <[hidden email]> wrote:

On 2018-09-13 11:34 AM, Cem Bassoy via ublas wrote:

We would only need to specify and implement one data structure ' tensor ' and if needed  provide optimized functions for matrices. This simplifies the maintenance.

A big advantage (which has been my main motivation for pushing for this solution) is that such a scenario would be fully in line with the existing Boost.uBLAS API, so your work becomes a natural extension of what we already have.

I think, just the contrary is the case. There would be no extension to the old dense matrix class template, as the new alias template would replace the old one because we cannot have the identifier in the same namespace. In the above case, we need to port all vector and matrix functions for the new tensor type. The vector and matrix class templates are not alias templates but are distinct class templates. If I am not mistaken, adding the tensor as a class template as it is right now would be the uBLAS way.

I think I may have expressed myself poorly. Yes, I agree: with your approach you would introduce new "matrix" and "vector" types (as template aliases). It is my hope however that we could use those as drop-in replacements for the old matrix and vector classes, i.e. I would like to simply replace those (assuming of course that they are sufficiently API-compatible to make this possible). In that case, no other code (such as stand-alone functions / operators taking vector and matrix arguments) would need to change.

Yes. I think it would mostly be a matter of adjusting the free functions to the alias template.
 

Of course, if we need to port code over, it's a sign that the old and new types aren't API-compatible, so this becomes a bigger question (as it also affects users). Again, my assumption was that we could come up with a new API that was backward-compatible.

Hmmm, backward compatibility could be a bit more difficult in this case. There are so many iterators inside those classes, and we do not need them, at least not on this level, I think. So if we agree on a tensor class template with a static rank, using alias templates for matrix and vector, does that mean we would provide a new API with the same functionality but better usability?

 

 

Alternatively, if you keep the rank a runtime parameter, you are basically proposing an entirely new API, which means that Boost.uBLAS users will have to decide whether to use the old or the new API, which I'm afraid will result in a fragmentation of the community. Likewise, many existing operations only support existing vector and matrix types, so maintainers will have more work to do to support both APIs.

That, to me as library maintainer, is a very high cost, so I'm reluctant to such a change, even if the proposed API with runtime ranks is otherwise sound.

Yes agree with you on that point.

Glad to hear that  ! :-)

So I will wait for more opinions before continuing to adjust the tensor class template.

Cheers
C


Re: Deciding on tensor parameters


On 2018-09-13 03:44 PM, Cem Bassoy via ublas wrote:



On Thu, Sep 13, 2018 at 8:35 PM, Stefan Seefeld via ublas <[hidden email]> wrote:


Of course, if we need to port code over, it's a sign that the old and new types aren't API-compatible, so this becomes a bigger question (as it also affects users). Again, my assumption was that we could come up with a new API that was backward-compatible.

Hmmm, backward compatibility could be a bit more difficult in this case. There are so many iterators inside those classes. We do not need them. At least only, not on this level I think. So if we agree on tensor class template with a static rank using alias templates for matrix and vector, means that we would provide a new api with the same functionality but better usability?

Yeah. 
 

Alternatively, if you keep the rank a runtime parameter, you are basically proposing an entirely new API, which means that Boost.uBLAS users will have to decide whether to use the old or the new API, which I'm afraid will result in a fragmentation of the community. Likewise, many existing operations only support existing vector and matrix types, so maintainers will have more work to do to support both APIs.

That, to me as library maintainer, is a very high cost, so I'm reluctant to such a change, even if the proposed API with runtime ranks is otherwise sound.

Yes agree with you on that point.

Glad to hear that  ! :-)

So I will wait for more opinions before continuing to adjust the tensor class template.

OK. Not sure how many people pay attention to this discussion, though. If you don't hear anything within a few days (a week at most, I'd say), I'd just move forward.


Stefan