(Newbie) Embedded programming, memory pools, and allocate_shared()


(Newbie) Embedded programming, memory pools, and allocate_shared()

Boost - Users mailing list
I apologize for bothering the mailing-list with this newbie question, but after searching the Net for several days, I still can't find the answers I'm looking for.

I am considering using Boost for a project I'm doing on an embedded system.
I can't seem to find a lot of discussion on the particulars of Boost in embedded systems, just a few here and there.
The biggest relevant limitation in embedded programming is on dynamic memory allocation -- the usual pattern is to allocate all necessary memory in "startup" phases, e.g. launching the program, loading the next playing-field in a game application, etc.

I believe I can do what I want with allocate_shared() and fixed-size memory pools.
The first big issue I have is trying to figure out the size of the objects that'll be allocated from the pool.
I don't want to just allocate a raw block of memory; I want to allocate a block with space for a certain number of objects of a homogeneous type.
But an object_pool for a type T doesn't accurately reflect what allocate_shared() is going to try to create.
From much digging around, it looks like it's actually creating a subclass of boost::detail::sp_counted_base, e.g. boost::detail::sp_counted_impl_pd<P,D>, boost::detail::sp_counted_impl_pda<P,D,A>, etc.
It seems like I'm not supposed to refer to that class directly, but I don't know how else to get the size of individual objects for my pool-allocator.
Also, navigating all the template-parameters needed by these classes is proving to be mind-twisting. Probably just because I'm seeing it for the first time.

Another example of where I need to track down the implementation class is with async-operation handlers in Boost.Asio, e.g. completion handlers for a socket's async_read_some(), generic handlers passed to io_service::post(), and so on. In the case of socket::async_read_some(), there's an example in boost/doc/html/boost_asio/example/cpp03/allocation/server.cpp which shows how to allocate completion handlers from a pool, but not how to get the size of the necessary completion-handler class. It appears the class is boost::asio::detail::reactive_socket_recv_op<>. Again, this doesn't seem to be a public class, nor can I find a nice public typedef for it.

Can anyone advise me on how to use Boost in this way?
All of the Boost memory-pools I've seen so far either just allocate chunks of raw memory, or they allocate non-shared variants of objects.
A memory-pool with a given number of spaces for the final object type seems like a defensible, traditional solution, but it doesn't seem that Boost makes that easy.
Am I way off base here?

Thanks for any help.

-Steven


_______________________________________________
Boost-users mailing list
[hidden email]
https://lists.boost.org/mailman/listinfo.cgi/boost-users

Re: (Newbie) Embedded programming, memory pools, and allocate_shared()

Boost - Users mailing list
On Tue, Dec 5, 2017 at 4:18 PM, Steven Boswell II via Boost-users
<[hidden email]> wrote:
> I apologize for bothering the mailing-list with this newbie question, but
> after searching the Net for several days, I still can't find the answers I'm
> looking for.

I don't know about "embedded systems" in general. That's a pretty wide
swath. Are we talking about Motorola? ARM?

I've managed to add Boost support to an ARM based embedded ArchLinux.
Works pretty well. The trick is finding the right host/target
combination for cross compilation. All very doable, and there are
build environments available for just about any combination.

> I am considering using Boost for a project I'm doing on an embedded system.
> I can't seem to find a lot of discussion on the particulars of Boost in
> embedded systems, just a few here and there.
> The biggest relevant limitation in embedded programming is on dynamic memory
> allocation -- the usual pattern is to allocate all necessary memory in
> "startup" phases, e.g. launching the program, loading the next playing-field
> in a game application, etc.
>
> I believe I can do what I want with allocate_shared() and fixed-size memory
> pools.
> The first big issue I have is trying to figure out the size of the objects
> that'll be allocated from the pool.
> I don't want to just allocate a raw block of memory; I want to allocate a
> block with space for a certain number of objects of a homogeneous type.
> But an object_pool for a type T doesn't accurately reflect what
> allocate_shared() is going to try to create.
> From much digging around, it looks like it's actually creating a subclass of
> boost::detail::sp_counted_base, e.g. boost::detail::sp_counted_impl_pd<P,D>,
> boost::detail::sp_counted_impl_pda<P,D,A>, etc.
> It seems like I'm not supposed to refer to that class directly, but I don't
> know how else to get the size of individual objects for my pool-allocator.
> Also, navigating all the template-parameters needed by these classes is
> proving to be mind-twisting. Probably just because I'm seeing it for the
> first time.
>
> Another example of where I need to track down the implementation class is
> with async-operation handlers in Boost::ASIO, e.g. completion handlers for
> socket's async_read_some(), generic handlers passed to io_service::post(),
> and so on. In the case of socket::async_read_some(), there's an example in
> boost/doc/html/boost_asio/example/cpp03/allocation/server.cpp which shows
> how to allocate completion-handlers from a pool, but not how to get the size
> of the necessary completion-handler class. It appears the class is
> boost::asio::detail::reactive_socket_recv_op<>. Again, this doesn't seem to
> be a public class, nor can I find a nice public typedef for it.
>
> Can anyone advise me on how to use Boost in this way?
> All of the Boost memory-pools I've seen so far either just allocate chunks
> of raw memory, or they allocate non-shared variants of objects.
> A memory-pool with a given number of spaces for the final object type seems
> like a defensible, traditional solution, but it doesn't seem that Boost
> makes that easy.
> Am I way off base here?
>
> Thanks for any help.
>
> -Steven

Re: (Newbie) Embedded programming, memory pools, and allocate_shared()

Boost - Users mailing list
On Tuesday, December 5, 2017 at 10:33 PM, Michael Powell via Boost-users <[hidden email]> wrote:

>On Tue, Dec 5, 2017 at 4:18 PM, Steven Boswell II via Boost-users <[hidden email]> wrote:
>>I apologize for bothering the mailing-list with this newbie question,
>>but after searching the Net for several days, I still can't find the
>>answers I'm looking for.
>
>I don't know about "embedded systems" in general.  That's a pretty
>wide swath.  Are we talking about Motorola?  ARM?
>
>I've managed to add Boost support to an ARM based embedded ArchLinux.
>Works pretty well.  The trick is finding the right host/target
>combination for cross compilation.  All very doable, and there are
>build environments available for just about any combination.

I've already built Boost in the embedded environment.
That's not my issue.
The aforementioned questions about memory-pools and implementation-classes are my issue.
Still, thanks for your response. Hopefully someone can help me.

-Steven



Re: (Newbie) Embedded programming, memory pools, and allocate_shared()

Boost - Users mailing list
In reply to this post by Boost - Users mailing list
So far, the answer seems to involve referring to Boost implementation details, which doesn't thrill me.
I just wanted to create some memory pools that could be used to allocate objects referred to through boost::shared_ptr<>.
I want to avoid non-pool memory allocation, and I want each shared-object allocation to involve one (and only one) allocation from one pool, i.e. to be as predictable and compact as possible.

These hand-written definitions for a pool and an allocator seem to do what I want:

----------

#include <cstddef> // size_t

// NOTE: the embedded free-list trick below requires sizeof(T) >= sizeof(T*).
template<typename T>
class my_pool
{
private:
  char *rawStorage;
  T *storage;
  T *nextFree;
public:
  my_pool(size_t S)
    : storage(nullptr), nextFree(nullptr)
  {
    // Allocate memory.
    rawStorage = new char[sizeof(T) * S];
    storage = reinterpret_cast<T*>( rawStorage );

    // All objects are free, initially: each free slot stores a pointer
    // to the next free slot in its own bytes.
    for (size_t i = 0; i < S; ++i)
    {
      T **pFreeObject = reinterpret_cast<T**>(&storage[i]);
      *pFreeObject = nextFree;
      nextFree = &storage[i];
    }
  }

  ~my_pool()
  {
    // TODO: Verify that all allocated objects have been freed.
    delete[] rawStorage;
  }

  T *allocate()
  {
    // Pop the head of the free list; returns nullptr when exhausted.
    T *freeObject = nextFree;
    if (nextFree != nullptr)
    {
      T **pFreeObject = reinterpret_cast<T**>(nextFree);
      nextFree = *pFreeObject;
    }
    return freeObject;
  }

  void deallocate(T *obj)
  {
    // Push the slot back onto the free list.
    T **pFreeObject = reinterpret_cast<T**>(obj);
    *pFreeObject = nextFree;
    nextFree = obj;
  }
};

template<typename T>
class my_allocator : public at_boost::detail::sp_ms_deleter<T>
{
public:
  typedef at_boost::detail::sp_counted_impl_pda<T *, at_boost::detail::sp_ms_deleter<T>, my_allocator<T> > shared_type;
  typedef my_pool<shared_type> pool_type;
private:
  pool_type &m_rPool;
  void operator=(my_allocator const &other); // Disallow assignment.
public:
  explicit my_allocator(pool_type &a_rPool) : m_rPool(a_rPool) { }
  my_allocator(my_allocator const &other) : m_rPool(other.m_rPool) { }

  template <typename U>
  struct rebind
  {
    typedef my_allocator/*<U>*/ other;
  };

  shared_type *allocate(int iCount)
  {
    if (iCount == 1) // Only expected to allocate single objects.
      return m_rPool.allocate();
    return nullptr; // Caller beware: allocate_shared doesn't expect nullptr.
  }
  void deallocate(shared_type *obj, size_t iCount)
  {
    m_rPool.deallocate(obj);
  }
};

----------

It still needs polish, but it's just a proof of concept.

Here's some code that uses the pool/allocator:

----------
my_allocator<int>::pool_type myPool (1024);
my_allocator<int> myAllocator(myPool);
boost::shared_ptr<int> pTest = boost::allocate_shared<int, my_allocator<int> >(myAllocator);
int iVal0 = *pTest;
*pTest = sizeof(my_allocator<int>::shared_type);
int iVal1 = *pTest;
----------

No big deal. Mostly, the test is to make sure that ::operator new() doesn't get called after the construction of the pool, and that seems to be true.

I'm surprised that my expected usage of memory-pools isn't more common.
Maybe using boost in an environment with constraints on dynamic memory allocation is unusual?

Also, at some point, I need to figure out why my definition of my_allocator<T>::rebind can't use <U> in the way that the standard boost allocators do.
I get an incomprehensible error-message when I do that:

----------
boost\smart_ptr\detail\sp_counted_impl.hpp(237): error C2664: 'my_allocator<T>::my_allocator(my_pool<at_boost::detail::sp_counted_impl_pda<P,D,A>> &)' : cannot convert parameter 1 from 'my_allocator<T>' to 'my_pool<T> &'
----------

Any comments on what I'm trying to do here?

-Steven



Re: (Newbie) Embedded programming, memory pools, and allocate_shared()

Boost - Users mailing list
Attached to this e-mail is a zip file containing my code so far.
It's a modification of the sample code normally found at boost/libs/asio/example/cpp03/allocation/server.cpp .
To use the enclosed Visual Studio solution, you'll need to set two environment variables.
BOOST_HOME needs to be set to your Boost directory.
BOOST_LIB needs to be set to where you've built the Boost DLLs.
This project was built/run against Boost 1.64.
Hopefully I didn't screw anything up while "depersonalizing" it.

fixed_pool_allocators.h contains the code I found necessary to allocate async-read/write handlers from memory-pools.
The idea is to create fixed-size pools with items that are the exact size needed by their clients.
I think that's a perfectly normal thing to want when trying to avoid dynamic memory allocation in an embedded programming environment.
But as you can see, Boost doesn't exactly make it easy to find the typedefs that the pools need to determine the required amounts of space.



I'm hoping someone can tell me that I should be using some simpler typedefs, provided by somewhere in Boost that I haven't found yet.
Barring that, maybe someone can tell me that what I want to do with memory-pools here is too unconventional to expect Boost to support it easily.
Or maybe someone will tell me that this isn't the right place to ask questions like these, and point me to somewhere more appropriate.
I expected the boost-users mailing list to be the ideal place to ask such questions.

-Steven



Attachment: BoostAsioTest.zip (14K)

Re: (Newbie) Embedded programming, memory pools, and allocate_shared()

Boost - Users mailing list
On Mon, Dec 11, 2017 at 1:43 PM, Steven Boswell II via Boost-users
<[hidden email]> wrote:
> Attached to this e-mail is a zip file containing my code so far.
> It's a modification of the sample code normally found at
> boost/libs/asio/example/cpp03/allocation/server.cpp .
> To use the enclosed Visual Studio solution, you'll need to set two
> environment variables.
> BOOST_HOME needs to be set to your Boost directory.
> BOOST_LIB needs to be set to where you've built the Boost DLLs.
> This project was built/run against Boost 1.64.
> Hopefully I didn't screw anything up while "depersonalizing" it.

What are you targeting? You mentioned C++/CLI, .NET, etc. Windows Embedded?

> fixed_pool_allocators.h contains the code I found necessary to allocate
> async-read/write handlers from memory-pools.
> The idea is to create fixed-size pools with items that are the exact size
> needed by their clients.
> I think that's a perfectly normal thing to want when trying to avoid dynamic
> memory allocation in an embedded programming environment.
> But as you can see, Boost doesn't exactly make it easy to find the typedefs
> that the pools need to determine the required amounts of space.
>
>
>
> I'm hoping someone can tell me that I should be using some simpler typedefs,
> provided by somewhere in Boost that I haven't found yet.
> Barring that, maybe someone can tell me that what I want to do with
> memory-pools here is too unconventional to expect Boost to support it
> easily.
> Or maybe someone will tell me that this isn't the right place to ask
> questions like these, and point me to somewhere more appropriate.
> I expected the boost-users mailing list to be the ideal place to ask such
> questions.
>
> -Steven

Re: (Newbie) Embedded programming, memory pools, and allocate_shared()

Boost - Users mailing list
On Mon, Dec 11, 2017 at 5:05PM, Michael Powell via Boost-users <[hidden email]> wrote:


>What are you targeting? You mentioned C++/CLI, .NET, etc. Windows Embedded?

I'm not using C++/CLI in this project.
The only time I mentioned C++/CLI was when trying to answer someone else's question.

The embedded device is something my employer manufactures, but that's irrelevant. My question is about having to refer to lots of implementation/detail classes just to do something I consider simple and straightforward, i.e. a memory pool that holds a fixed number of objects of a specific type, which is a pretty standard idiom in embedded programming.

In further tests, I've found that pending socket operations (e.g. the result of async_read_some() and async_write() calls) lead to dynamic memory allocation from within Boost.Asio, and there's presently no way to override that, though I'm trying to lay the groundwork for that right now. (It mostly means adding a template parameter to several classes to identify the allocator to use.) So far, it doesn't look like Boost.Asio has a complete solution for allocating all necessary memory from pools, and I don't know how much work it'll take to do that, or whether such work would be accepted back into the project.

-Steven



Re: (Newbie) Embedded programming, memory pools, and allocate_shared()

Boost - Users mailing list
On Tue, Dec 12, 2017 at 9:28 AM, Steven Boswell II via Boost-users
<[hidden email]> wrote:
> On Mon, Dec 11, 2017 at 5:05PM, Michael Powell via Boost-users
> <[hidden email]> wrote:
>
>
>>What are you targeting? You mentioned C++/CLI, .NET, etc. Windows Embedded?
>
> I'm not using C++/CLI in this project.
> The only time I mentioned C++/CLI was when trying to answer someone else's
> question.

My mistake. I must be confusing threads.

> The embedded device is something my employer manufactures, but that's
> irrelevant -- my question is about having to refer to lots of
> implementation/detail classes just to do something I consider simple and
> straightforward, i.e. a memory-pool that holds a fixed number of objects of
> a specific type, which is a pretty standard cliche for embedded programming.

I considered Boost.Asio once upon a time for one of my embedded
projects a couple of years ago, but I couldn't get it to work quite
right, so I decided to roll my own messaging framework.

> In further tests, I've found that pending socket operations (e.g. the result
> of async_read_some() and async_write() calls) lead to dynamic memory
> allocation from within Boost::ASIO, and there's presently no way to override
> that, though I'm trying to lay in the groundwork for that right now. (It
> mostly means adding a template-parameter to several classes to identify the
> allocator to use.) So far, it doesn't look like Boost::ASIO has a complete
> solution to allocate all necessary memory from pools, and I don't know how
> much work it'll take to do that, or if such work will be accepted back into
> the project.

Beyond that, I don't know. The cost of doing business the Boost way
includes comprehension of linked modules. That's just the way it is.
You may not need ALL modules, but you may incur SOME of them,
depending on how broad a functional dependence you adopt. You may be
able to defer some of that cost depending on whether you can link to static or
shared modules.

> -Steven

Re: (Newbie) Embedded programming, memory pools, and allocate_shared()

Boost - Users mailing list
On Mon, Dec 12, 2017 at 7:42AM, Michael Powell via Boost-users <[hidden email]> wrote:
>I considered Boost.Asio once upon a time for one of my embedded
>projects a couple of years ago, but I couldn't get it to work quite
>right, so I decided to roll my own messaging framework.

Sadly, that's the direction I'll probably have to go.

Providing an allocator for pending socket operations means supplying it in places where that isn't feasible without forcing heavy changes on client code, e.g. asio_handler_allocate() and asio_handler_deallocate(). I could add a boost::asio::io_service<A> parameter to those two functions, but then they'd have to be templated too. Also, Boost.Asio makes heavy use of typedefs that can't easily be templated, not without C++11's "using" feature.

It appears that Boost.Asio won't work in an embedded context any time soon.

It's too bad -- my upcoming project is going to make heavy use of asynchronous I/O, circular-buffers, and lock-free queues, and I really didn't want to have to roll my own.
I hate reinventing the wheel; I have enough to do as it is.

Thanks for your help.

-Steven



Re: (Newbie) Embedded programming, memory pools, and allocate_shared()

Boost - Users mailing list
On Tue, Dec 12, 2017 at 10:14 AM, Steven Boswell II via Boost-users
<[hidden email]> wrote:

> On Mon, Dec 12, 2017 at 7:42AM, Michael Powell via Boost-users
> <[hidden email]> wrote:
>>I considered Boost.Asio once upon a time for one of my embedded
>>projects a couple of years ago, but I couldn't get it to work quite
>>right, so I decided to roll my own messaging framework.
>
> Sadly, that's the direction I'll probably have to go.
>
> Providing an allocator for pending socket operations involves having to
> provide the allocator in places where it's not feasible without forcing
> heavy changes on client code, e.g. asio_handler_allocate() and
> asio_handler_deallocate(). I can add a boost::asio::io_service<A> parameter
> to those 2 functions, but then they have to be templated too. Also,
> Boost::ASIO makes heavy use of typedefs that can't easily be made templated,
> not without requiring C++11's "using" feature.
>
> It appears that Boost::ASIO won't work in an embedded context any time soon.

I spent a little time with it at the time, and if memory serves I was
getting segfault crashes. I didn't want to spend a lot of time
persuading Boost.Asio, so I decided to go the roll-your-own route.

> It's too bad -- my upcoming project is going to make heavy use of
> asynchronous I/O, circular-buffers, and lock-free queues, and I really
> didn't want to have to roll my own.
> I hate reinventing the wheel; I have enough to do as it is.

I agree it's a pain, but it's not so bad. I got it working with a
fairly simple byte-level protocol; it was async as far as the app was
concerned, running in a mutex-guarded thread, and I managed to make it
work for UDP as well as for TCP.

> Thanks for your help.
>
> -Steven

Re: (Newbie) Embedded programming, memory pools, and allocate_shared()

Boost - Users mailing list
In reply to this post by Boost - Users mailing list
On 13/12/2017 04:14, Steven Boswell II wrote:
> On Mon, Dec 12, 2017 at 7:42AM, Michael Powell wrote:
>  >I considered Boost.Asio once upon a time for one of my embedded
>  >projects a couple of years ago, but I couldn't get it to work quite
>  >right, so I decided to roll my own messaging framework.
>
> Sadly, that's the direction I'll probably have to go.
[...]
> It appears that Boost::ASIO won't work in an embedded context any time soon.
>
> It's too bad -- my upcoming project is going to make heavy use of
> asynchronous I/O, circular-buffers, and lock-free queues, and I really
> didn't want to have to roll my own.
> I hate reinventing the wheel; I have enough to do as it is.

I ended up rolling my own code for asynchronous serial I/O at one point
because ASIO was a little too mutex-happy, which was causing latency
issues.  Although conversely my version was probably more malloc-happy
than ASIO is.  (And it used boost::function rather than templating all
the things, which is probably another performance negative, albeit one
that didn't matter as much to the application at hand.)

But most memory allocators are decently fast nowadays, to the point
where having memory allocations on threads that are doing socket I/O
will probably be dominated by the I/O rather than the allocation.  You
just need to make sure that you're using a good per-thread allocator and
keep the I/O on separate threads from anything that needs to be more
realtime, and then you should be ok even if ASIO does allocate.


Re: (Newbie) Embedded programming, memory pools, and allocate_shared()

Boost - Users mailing list
On Tue, Dec 12, 2017 at 5:50PM, Gavin Lambert wrote:
>I ended up rolling my own code for asynchronous serial I/O at one point
>because ASIO was a little too mutex-happy, which was causing latency
>issues.
>[...]
>But most memory allocators are decently fast nowadays, to the point
>where having memory allocations on threads that are doing socket I/O
>will probably be dominated by the I/O rather than the allocation.

Thanks for warning me that it's somewhat mutex-happy.
Perhaps that can be fixed, but that'd be one more thing I'd have to do.

And my motivation for controlling memory-allocation wasn't speed, it was to make sure I don't exceed my budget, or cause stability problems (e.g. memory fragmentation), in an embedded environment.

-Steven



Re: (Newbie) Embedded programming, memory pools, and allocate_shared()

Boost - Users mailing list
On Wed, Dec 13, 2017 at 9:08 AM, Steven Boswell II via Boost-users
<[hidden email]> wrote:

> On Tue, Dec 12, 2017 at 5:50PM, Gavin Lambert wrote:
>>I ended up rolling my own code for asynchronous serial I/O at one point
>>because ASIO was a little too mutex-happy, which was causing latency
>>issues.
>>[...]
>>But most memory allocators are decently fast nowadays, to the point
>>where having memory allocations on threads that are doing socket I/O
>>will probably be dominated by the I/O rather than the allocation.
>
> Thanks for warning me that it's somewhat mutex-happy.
> Perhaps that can be fixed, but that'd be one more thing I'd have to do.
>
> And my motivation for controlling memory-allocation wasn't speed, it was to
> make sure I don't exceed my budget, or cause stability problems (e.g. memory
> fragmentation), in an embedded environment.

I appreciate that. Not my first rodeo; there are always trade-offs to be made.

> -Steven