multi-dimensional arrays?


multi-dimensional arrays?

Neal Becker
Does glas intend to address multi-dimensional arrays?

There seems to be a big disconnect between C++ libraries that support 1D and
2D arrays with linear algebra support and libraries that support higher
dimensionality (usually without any connection to linear algebra).
IMO, this is really unfortunate.  I think we need either one library that
supports both multidimensional arrays and linear algebra, or two libraries
that interoperate well.

Re: multi-dimensional arrays?

Ian McCulloch

On Thu, 14 Dec 2006, Neal Becker wrote:

> Does glas intend to address multi-dimensional arrays?
> [...]

The problem, as I see it, is that higher-dimensional objects don't allow
for the nice notation that vectors and matrices do.  Most tensor libraries
I've seen use some sort of index notation, say
a[i][j][k] = b[i][j] * c[k] + d[i][m] * e[k][l][m]
but for ordinary matrices this is quite cumbersome.  Is there a nice
notation that unifies vectors/matrices with higher-dimensional tensors?

Hopefully it would not be too hard to have a tensor library where the
special cases of 1- and 2-dimensional tensors satisfy the glas vector and
matrix concepts, though.

Nested vectors and matrices are a different case; these should be
supported.  Much of it comes naturally, unless you go out of your way to
forbid constructing a matrix<matrix<T> >.  But there does need to be some
extra functionality, for example a partial transpose
A(i,j)(k,l) -> B(i,k)(j,l).  I don't think this affects the core library
much (although I may be wrong), as long as one keeps in mind that the
nested operation might need to be specified.  E.g., for a vector of vectors,
you might want to construct the vector of 2-norms of the nested vectors and
then take the 1-norm of that vector, as one operation, say as a
lambda expression norm_1(v, norm_2(_1)).  Or maybe
norm_1(map(v, norm_2(_1))).
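
For what it's worth, here is a minimal sketch of that nested-norm computation
written out by hand (hypothetical free functions, not glas API):

#include <cmath>
#include <cstddef>
#include <vector>

// 2-norm of a flat vector.
double norm_2(const std::vector<double>& v)
{
    double s = 0.0;
    for (std::size_t i = 0; i < v.size(); ++i) s += v[i] * v[i];
    return std::sqrt(s);
}

// 1-norm of the vector of 2-norms of the nested vectors, in one pass.
double norm_1_of_norm_2(const std::vector<std::vector<double> >& v)
{
    double s = 0.0;
    for (std::size_t i = 0; i < v.size(); ++i) s += norm_2(v[i]);
    return s;
}

A lambda-style norm_1(map(v, norm_2(_1))) would express the same computation
without a hand-written helper.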

Cheers,
Ian

Re: multi-dimensional arrays?

plagne.laurent
Hi all,

I had to develop my own lib since I could not find any libs dealing with
multi-dim vectors in the sense of Vector< Vector<..> >. Those vectors are used
with corresponding Matrix< Matrix<..> > (block matrices).

Such situations arise naturally when the underlying differential equation
evolves in a space that is a tensor product of sub-spaces. For example, the
space of the Boltzmann equation is the phase space, which can be seen as a
tensor product of real space and momentum space. Another example is a
Cartesian 2D space, which can be seen as X \otimes Y.

In these cases, there is no problem with natural notations, since a matrix-vector
product A*X leads to terms A(i,j)*X[j], which are again matrix-vector products.
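
As an illustration only (hypothetical Matrix/Vector templates, not the types of
any existing lib), the recursion can be written as one overloaded function with
a scalar base case:

#include <cstddef>
#include <vector>

template <class T> struct Vector { std::vector<T> data; };
template <class T> struct Matrix { std::vector<std::vector<T> > data; };

// Base case: scalar multiply-accumulate.
inline void mult_add(double a, double x, double& y) { y += a * x; }

// Block case: Y += A*X, where the blocks may themselves be matrices/vectors.
template <class T, class TV>
void mult_add(const Matrix<T>& A, const Vector<TV>& X, Vector<TV>& Y)
{
    for (std::size_t i = 0; i < A.data.size(); ++i)
        for (std::size_t j = 0; j < A.data[i].size(); ++j)
            mult_add(A.data[i][j], X.data[j], Y.data[i]);  // recurse until scalars
}

Calling mult_add on Matrix< Matrix<double> > and Vector< Vector<double> >
reduces block by block to scalar operations, exactly as A(i,j)*X[j] suggests.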

I know that the GLAS project is still in a design stage, but in case some of you
are interested: I made some interesting (at least for me) experiments on the
subject of expression template efficiency in the case of large (out-of-cache)
vectors (mono- and multi-dim). In short, my ET mechanism is specialized to use
ATLAS in the case of 1D vectors. The result is that I can type X=a*Y+b*Z (in
the multi-dim case) and obtain results that are better than the
corresponding low-level loop coding (by roughly 40%). I also compared these
results with Blitz and uBLAS, which only provide results as good as the
low-level loop code.
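
Just to illustrate the dispatch idea (this is not my actual ET machinery, and no
ATLAS call is shown): the evaluation of X = a*Y + b*Z can recurse over blocks
generically and bottom out in a 1D routine that a specialized backend could
replace:

#include <cstddef>
#include <vector>

// 1D base case: this is the level at which a tuned kernel (e.g. from
// ATLAS/BLAS) could be substituted for the plain loop.
inline void axpby(std::vector<double>& X, double a,
                  const std::vector<double>& Y, double b,
                  const std::vector<double>& Z)
{
    for (std::size_t i = 0; i < X.size(); ++i)
        X[i] = a * Y[i] + b * Z[i];
}

// Multi-dim case: a vector of blocks is handled block by block, recursing
// until the 1D base case is reached.
template <class Block>
void axpby(std::vector<Block>& X, double a,
           const std::vector<Block>& Y, double b,
           const std::vector<Block>& Z)
{
    for (std::size_t i = 0; i < X.size(); ++i)
        axpby(X[i], a, Y[i], b, Z[i]);
}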

Regards

Laurent

Re: multi-dimensional arrays?

Karl Meerbergen-2
Ian McCulloch wrote:

>Hopefully it would not be too hard to have a tensor library where the
>special cases of 1- and 2-dimensional tensors satisfy the glas vector and
>matrix concepts though.
>
>Nested vectors and matrices are a different case, these should be
>supported.  Much of it comes naturally, unless you go out of the way to
>forbid constructing a matrix<matrix<T> >.  But there does need to be some
>extra functionality, for example partial transpose
>A(i,j)(k,l) -> B(i,k)(j,l).  I don't think this affects the core library
>much (although I may be wrong)
>  
>
I think this is an important point. Toon and I have thought about this
for a long time. We will certainly talk about this in detail. Providing
the container matrix< matrix<T> > is not hard. I recall that algorithms
are application/interpretation dependent, and we had a lively discussion
on the list on this topic. So, I agree that 'some' support is needed,
although it is not yet clear to me what 'some' here means.

Best,


Karl



Re: multi-dimensional arrays?

plagne.laurent
Hi Karl,

IMHO, matrix< matrix<..> > can be more or less difficult to implement
depending on the constraints you accept. As an example, suppose you decide that
the library must remain open to third-party codes and handle different kinds of
matrix data storage: this is pretty easy to do with a simple matrix type
(matrix<double>) because the return type of A(i,j) will be a simple ref to
double (or const ref). In the 2D case, matrix< matrix<double> > A, the
accessor A(i,j) will naturally be a ref to matrix<double>, and you lose
compatibility with third-party data elements...
One can avoid this problem, but it is not a trivial issue.
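
One direction (purely illustrative, not a claim about how any existing lib
solves it) is to stop hard-wiring the accessor's return type and let a traits
class describe it, so a third-party storage can supply a proxy instead of a
plain reference:

#include <cstddef>
#include <vector>

template <class M> struct matrix_traits;   // each matrix type specializes this

template <class T>
struct dense_matrix {
    std::size_t nr, nc;
    std::vector<T> data;
    const T& operator()(std::size_t i, std::size_t j) const { return data[i*nc + j]; }
};

template <class T>
struct matrix_traits< dense_matrix<T> > {
    typedef const T& const_reference;      // a wrapper type could expose a proxy here
};

// Generic code asks the traits what A(i,j) yields instead of assuming const T&.
template <class M>
typename matrix_traits<M>::const_reference
element(const M& A, std::size_t i, std::size_t j)
{
    return A(i, j);
}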

Another point on the nested matrix issue is that it can help (again IMHO) to
validate the very impressive work the GLAS team has done with vector space
concepts (hi Peter). For example, the 0 and 1 matrix counterparts have to be
treated efficiently.

A last point is maybe not such a general issue: in my case, I have to handle
sparse matrices with a complicated but highly structured sparsity pattern. This
pattern can be expressed with a reduced set of simple sparsity patterns, e.g.
A = Dense<Sparse<Diagonal<..> > > + Sparse<Tridiagonal<Dense<..> > >


Best

Laurent



Re: multi-dimensional arrays?

Peter Gottschling
Hi Laurent,

Concerning nested matrix and vector types, we need to ask ourselves
first whether we need special algorithms to handle them, or whether at least in
some cases the operators on the values already use the right algorithm.

Second, is the use of operators efficient enough performance-wise?  In
a matrix multiplication, c(i, k) += a(i, j) * b(j, k) is correct for any
element type with consistent operators, but not very efficient if the elements
of a, b, and c are themselves matrices, unless we have really sophisticated
expression templates.
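
For concreteness, the "correct for any element type with consistent operators"
version is just the naive triple loop (illustration only, nothing GLAS-specific):

#include <cstddef>
#include <vector>

// c += a * b for dense row-major matrices over any element type T with
// consistent += and *.  Works when T is itself a matrix type, but is then
// far from optimal without blocking or expression templates.
template <class T>
void mult_add(const std::vector<std::vector<T> >& a,
              const std::vector<std::vector<T> >& b,
              std::vector<std::vector<T> >& c)
{
    for (std::size_t i = 0; i < a.size(); ++i)
        for (std::size_t k = 0; k < b[0].size(); ++k)
            for (std::size_t j = 0; j < b.size(); ++j)
                c[i][k] += a[i][j] * b[j][k];
}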

Next question: do we need special concepts?  Say, e.g.,
RecursiveTransposable for a recursive transpose function.

What would it look like?

concept RecursiveTransposable<typename Matrix>
{
    where RecursiveTransposable<Matrix::value_type>;
    // ...
};

would recursively descend and break if it reaches a scalar value that
has no value_type.

Conversely, something like:

auto // ???
concept RecursiveTransposable<typename Matrix>
{
     Matrix transpose(Matrix );
}

Now we say this is defined for arithmetic scalars

template<typename T>
     where Arithmetic<T>
concept_map RecursiveTransposable<T> {}

And now enable it from bottom up:

template<typename T>
     where RecursiveTransposable<T>
concept_map RecursiveTransposable<matrix<T> > {}


Extending this to block matrices,
(A B)
(C D)
where A, B, C, and D are matrices of different types, say we have a type
for heterogeneous blocks:

template <typename MA, typename MB, typename MC, typename MD>
class hb_matrix { ... };

We can define:
template <typename MA, typename MB, typename MC, typename MD>
     where RecursiveTransposable<MA>
         &&  RecursiveTransposable<MB>
         &&  RecursiveTransposable<MC>
         &&  RecursiveTransposable<MD>
concept_map RecursiveTransposable<hb_matrix<MA, MB, MC, MD> > {}

However, these are only some preliminary thoughts.  We need to look at which
algorithms we want to implement recursively.  And it looks like there can be
a great difference between a syntactically correct implementation and a
high-performance one.
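
As an aside, the same bottom-up enabling can also be emulated without language
concepts, with a recursive type trait (a sketch only; matrix and the trait name
are made up):

#include <type_traits>

template <class T> struct matrix { /* ... */ };

// Trait playing the role of the concept_map: true for arithmetic scalars,
// and for matrix<T> whenever it is true for T.
template <class T>
struct is_recursively_transposable : std::is_arithmetic<T> {};

template <class T>
struct is_recursively_transposable< matrix<T> >
    : is_recursively_transposable<T> {};

// Example: is_recursively_transposable< matrix< matrix<double> > >::value is true.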

Cheers,
Peter

------------
Peter Gottschling, Ph.D.
Research Associate
Open Systems Laboratory
Indiana University
135 Lindley Hall
Bloomington, IN 47405
Tel.: +1-812-855-3608   Fax: +1-812-856-0853
http://www.osl.iu.edu/~pgottsch


Re: multi-dimensional arrays?

Karl Meerbergen-2
Hi,


If I understand correctly, the reason for having nested matrices is to
organise blocking in matrices and vectors for exploiting the memory hierarchy.

1) Should the user see this structure? Does the user access the data via
the nesting, or can he address it directly? For example, in BLAS and LAPACK
implementations, this is fully transparent.

2) Former discussions on the mailing list have shown different
interpretations of nested vectors and matrices.

For example, for some, vector< vector<T> > was interpreted as a vector
organized in blocks, for others it was a matrix. Therefore, I would
prefer dedicated names such as blocked_vector<T> and matrix_as_vector<T>,
or something like that.

The internals of the matvec and matmat products should use some form of
blocking, but should this be visible inside the vector and matrix classes?


Karl





Re: multi-dimensional arrays?

plagne.laurent
Hi Peter, hi Karl, sorry for the delay.

Since my own lib (with a new name, LEGOLAS++) handles nested matrices and vectors,
I can report on how I had to address the issues you have mentioned. Up to now, I
did not address the difficult and important matrix<->matrix (BLAS3) issue in the
general case, so I do not pretend to hold all the answers...

>>Concerning nested matrix and vector types, we need to ask ourselves
>>first if we need special algorithms to handle it or if a least in some
>>cases the operators on the values already use the right algorithm.

All the matrices hold a specific sparsity pattern (Dense, Sparse, Banded,
Tridiag, ..) which defines a corresponding default matrix-vector algorithm. As an
example, a Dense matrix will rely (by default) on a basic matrix-vector algo
which is modified to take into account the assign mode (Y+=A*X, Y=A*X, ..):

struct DenseMatrixVectorProduct{
  template<class ASSIGN_MODE>
  struct Engine{

    template <class MATRIX, class VECTOR, class VECTOR_INOUT>
    static inline void apply(const MATRIX & A,
                             const VECTOR & X,
                             VECTOR_INOUT & Y)
    {
      ASSIGN_MODE::initialize(Y);
      const int NR = A.nrows();
      const int NC = A.ncols();

      for (int i = 0; i < NR; i++){
        for (int j = 0; j < NC; j++){

          ASSIGN_MODE::accumulate(Y[i], A(i,j) * X[j]);

        }
      }
    }
  };
};

In the case of Y = A*X:
       ASSIGN_MODE::initialize(Y)                  <=>  Y = 0.0
       ASSIGN_MODE::accumulate(Y[i], A(i,j)*X[j])  <=>  Y[i] += A(i,j)*X[j]
In the case of Y += A*X:
       ASSIGN_MODE::initialize(Y)                  <=>  DoNothing
       ASSIGN_MODE::accumulate(Y[i], A(i,j)*X[j])  <=>  Y[i] += A(i,j)*X[j]
In the case of Y -= A*X:
       ASSIGN_MODE::initialize(Y)                  <=>  DoNothing
       ASSIGN_MODE::accumulate(Y[i], A(i,j)*X[j])  <=>  Y[i] -= A(i,j)*X[j]
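
For concreteness, such assign-mode policies might look roughly like this (a
sketch inferred from the description above, not actual LEGOLAS++ code):

struct AssignMode {        // Y  = A*X
  // assumes the vector type supports assignment from a scalar, as above
  template <class V> static void initialize(V & Y) { Y = 0.0; }
  template <class E, class T> static void accumulate(E & y, const T & t) { y += t; }
};

struct PlusAssignMode {    // Y += A*X
  template <class V> static void initialize(V &) { /* do nothing */ }
  template <class E, class T> static void accumulate(E & y, const T & t) { y += t; }
};

struct MinusAssignMode {   // Y -= A*X
  template <class V> static void initialize(V &) { /* do nothing */ }
  template <class E, class T> static void accumulate(E & y, const T & t) { y -= t; }
};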

If A(i,j) is a submatrix, it will automatically invoke a sub-algorithm
linked to its sparsity pattern.

In addition, the user can provide his own algorithm at every level. For example,
if the matrix is Dense AND contains double, float or complex elements AND is
stored in a compatible 1D (RowMajor or ColMajor) container, then an xgemm algo
can be used. Up to now, I did not use traits to pick the best possible algorithm
available for a given matrix type. The important issue for me was that the user
deals only (in a first approach) with the sparsity patterns and is still able to
tune the code (via typedefs) at a later time. My main concern was that this
choice always remains possible, and that is why the user can also provide his
own container, usually prescribed by a third-party lib (e.g. ATLAS).

Since the vector case is much simpler in my case (only dense vectors), a traits
class is used to automatically invoke the best lib available for a given vector
operation (X += a*Y, dot, X = 0, ..) in the 1D case.

For the transpose issue, I have tried different strategies, but I doubt that I
have found a perfect one: every MVP algorithm comes with a paired Transpose
class whose method is called recursively via a transpose function. For example,

Y = transpose(A)*X leads to the use of

struct DenseMatrixVectorProduct{
....
    struct Transpose{
      template<class ASSIGN_MODE>
      struct Engine{

        template <class MATRIX, class VECTOR, class VECTOR_INOUT>
        static inline void apply(const MATRIX & A,
                                 const VECTOR & X,
                                 VECTOR_INOUT & Y)
        {
          ASSIGN_MODE::initialize(Y);
          const int NR = A.nrows();
          const int NC = A.ncols();

          for (int i = 0; i < NR; i++){
            for (int j = 0; j < NC; j++){
              ASSIGN_MODE::accumulate(Y[j], transpose(A(i,j)) * X[i]);
            }
          }
        }
      };
    };
};

I chose this rather heavy method because it accesses the matrix elements
in the same order as the direct product. One can argue that the traversal
order is closely linked to the matrix storage, which is not modified.


>Next question, do we need special concepts?  Say e.g.
>RecursiveTransposable for a recursive function transpose

To me, every matrix can be transposed... So no additional concept

>would recursively descent and break if it reaches a scalar value that
>has no value_type.

In LEGOLAS++, every matrix element type (especially scalars) comes with a
companion ElementDataDriver class which is used to ensure that every element
models the ElementMatrix concept. This concept includes the definition of a
transpose function, a dot product, zero, ...

Again Peter, I think that the matrix-matrix issue addressed by GLAS is much more
difficult.

>> Extending this to block-matrices,

Strictly speaking, I do not deal with block matrices, but only with matrices
whose elements model the ElementMatrix concept (a concept which matrices
themselves model). The same goes for vectors.

Hi Karl :

I do not deal with blocked algorithm used in optimized xgemm implementation for
Dense matrices. While this issue can be seen as an implementation policy, the
sparsity pattern of a matrix (chosen be the user) is NOT an implementation
detail which should be abstracted away. On the contrary I try to design my lib
to allow a very detailed description of this sparsity
pattern.

To me, a matrix and a 2D vector are very different types. Matrices are mainly
linear operators acting on vectors.
The container of a dense matrix may well be a 2D array (vector), but the
semantic level of a matrix and that of the corresponding container should be
strictly separated, since a matrix does not imply a container. Permutation
matrices, finite-difference stencils, etc. are matrices and usually are not
stored at all...

Best,

Laurent


Re: multi-dimensional arrays?

Peter Gottschling
Hi Laurent,

Sorry for my delay too.  Thank you for the detailed explanation.  Your
issues are much clearer to me now.  Still I don't know how to come up
with a big picture where everything fits in.

On 22.12.2006, at 12:15, [hidden email] wrote:

>> Next question, do we need special concepts?  Say e.g.
>> RecursiveTransposable for a recursive function transpose
>
> To me, every matrix can be transposed... So no additional concept
>
I chose Transposable for the sake of simplicity.  Practically, I can't
picture a matrix that is not transposable.  However, it is good to know that
no extra concepts are needed.

A trickier operation is the inversion of heterogeneously nested
matrices.  Informally, it requires that the blocks/elements on the
diagonal are invertible and that all products for the update are
defined.  Writing this formally with concepts will need some attention.
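For instance, with the 2x2 block matrix (A B; C D) from earlier in the thread,
block elimination needs inv(A), the Schur complement S = D - C*inv(A)*B and
inv(S); so the diagonal blocks A and S must be invertible, the mixed products
C*inv(A) and inv(A)*B must be defined, and the same requirements recurse into
A and S.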

>> would recursively descent and break if it reaches a scalar value that
>> has no value_type.
>
> In LEGOLAS++, every matrix elements type (especially scalars) comes
> with a
> companion ElementDataDriver class which is used to ensure that every
> elements
> will model the ElementMatrix Concept. This concept includes the
> definition of
> transpose function, dot product, zero,...
>
I agree that this concept ElementMatrix, including scalars, is the key to
defining the recursive algorithms formally.  Can you show us your
definition?  To increase the genericity of the algorithms, it might be an
idea to split the concept into several concepts, one per operation.
Probably not all operations are used in each algorithm.  In order to
keep the requirement lists of algorithms short, one can then group these
concepts by refinement.
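
Roughly, in the same proposed concepts syntax as before (only an illustration,
not a proposed GLAS definition, and the names are made up), the splitting and
regrouping could look like:

auto concept Transposable<typename T>
{
    T transpose(T);
};

auto concept Summable<typename T>
{
    T operator+(T, T);
};

auto concept Multipliable<typename T>
{
    T operator*(T, T);
};

// Grouped by refinement, so algorithm requirement lists stay short:
auto concept ElementMatrix<typename T>
    : Transposable<T>, Summable<T>, Multipliable<T>
{};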

> Again Peter, I think that the matrix-matrix issue addressed by GLAS is
> much more
> difficult.
>
Did I say it's easy?  I'm just searching for a perspective where as
much as possible is covered.

Best wishes,
Peter

------------
Peter Gottschling
Research Associate
Open Systems Laboratory
Indiana University
135 Lindley Hall
Bloomington, IN 47405
Tel.: +1-812-855-3608   Fax: +1-812-856-0853
http://www.osl.iu.edu/~pgottsch
