On Thu, 7 Jan 2021 at 14:24, Janek Kozicki via Boost <[hidden email]> wrote:

Yes, you can. We work with *std::vector* as the underlying base type:

template<class T, class F = first_order, class A = std::vector<T, std::allocator<T>>>
class tensor;

Just specialize your tensor type with your own allocator.

You can also provide your own base type instead of *std::vector*, but
it must model the sequence container concept.

using other_tensor_t =
  tensor<other_numeric_type, first_order, std::vector<T, other_allocator>>;

> In fact, maybe during the past 5 years a better C++ interface for
> libfftw has appeared. I will have to check :)

> maybe even boost library has an FFT interface? :)

>

> If not then I will need to refactor this crude FFTW3_Allocator memory

> allocator.

>

> > using tensor_t = boost::numeric::ublas::tensor<float,format_t>;

> > auto A = tensor_t(shape{3,4,2},2);
> > auto ap = A.data();

> > You can also use the standard C++ library for convenience:

> > std::for_each(A.begin(), A.end(), [](auto& a){ ++a; });

> > If you do not want to use the Einstein notation, you can also use
> > either

>

> I'm definitely going to use the Einstein notation :)

>

Glad it helps.

Please note that contractions are not optimized. For small dimensions, the

runtime should be tolerable.

For larger dimensions and ranks, please let me know - we are working on
fast tensor contractions (e.g. https://github.com/bassoy/ttv).

>

> > the prod function:

> > // C3(i,l1,l2) = A(i,j,k)*T1(l1,j)*T2(l2,k);

> > q = 3u;

> > tensor_t C3 =
> > prod(prod(A,matrix_t(m+1,n[q-2],1),q-1),matrix_t(m+2,n[q-1],1),q);

> >

> > "prod" internally uses a C-like interface which does not allocate
> > memory at all.

> >

> > ttm(m, p,

> > c.data(), c.extents().data(), c.strides().data(),

> > a.data(), a.extents().data(), a.strides().data(),

> > bb, nb.data(), wb.data());

>

> I have another question about indexing the tensor.

>

> Can I have a custom index redirection? I mean, when I type code to

> access zero-th element of 1D array:

>

> tensor[0] = ... ;

>

Yes, random access with read and write.

You can access tensor data with a single index like *t[0]* or
*t[(n*m*...*k)/2]*.
You can also use multi-indices for convenience, e.g. *t.at(5,2,4,6)*.

I am not really sure if I understand what you mean by *index redirection*.

You can always advance pointers and iterators of tensor.

> * zero-th element: in the middle of reserved memory.

>

> * half: at the end of reserved memory

> * half+1: at the beginning of reserved memory

>

>

If I understand you correctly, you would like to create two valid ranges.
Let *t* be a tensor with dimensions n, m, ..., k (each > 1), and let
p = n*m*...*k be its total number of elements.

Then *[first0,last0)* with

auto first0 = std::next(std::begin(t), p/2); // i.e. t.begin() + p/2
auto last0  = std::end(t);

should be your first valid range. (Note that *std::advance* modifies its
iterator argument in place and returns void, so *std::next* is the right
choice here.)

Then *[first1,last1)* with

auto first1 = std::begin(t);
auto last1  = std::next(std::begin(t), p/2);

should be your second valid range. (Since the range is half-open, last1
is begin + p/2, one past the last element of the first half.)

>

> then all other parts of my algorithm will not need to be modified,

> because they solely work with iterators.

>

If you need to, you can also work with iterators, e.g.
*std::next(std::begin(t), 5)*, just as you are used to with *std::vector*.
Note: for a simplified interface with the FFT library, you can do the
shifting and advancing using the tensor's raw pointer, simply
*t.data() + 5*.

> (Also I need to check if it's now possible to tell FFT to treat data

> differently, it wasn't possible 5 years ago, then I wouldn't need all

> these complications with indexing)

>

> Thank you, your code is very promising.

> Janek

>

Thanks - I hope this helps.

>

>

> --

> # Janek Kozicki

> http://janek.kozicki.pl/

> _______________________________________________

> Unsubscribe & other changes:

>

> http://lists.boost.org/mailman/listinfo.cgi/boost
