I'd like to restart the discussion on how conversions will work in
Boost.Python v3. Here's the starting point: I'd like to see support for static, template-based conversions. These would be defined by [partial-]specializing a traits class, and I tend to think they should only be invoked after attempting all registry-based conversions. Users would have to include the same headers in groups of interdependent modules to avoid specializing the same traits class multiple times in different ways; I can't think of a way to protect them from this, but template-based specializations are a sufficiently advanced featured that I'm comfortable leaving it up to users to avoid this problem. We've had some discussion of allowing different modules to have different registries (in addition to a global registry shared across modules). Leaving aside implementation questions, I have a little survey for interested parties: 1) Under what circumstances would you want a conversion that is completely limited to a specific module (or a group of modules that explicitly declare it)? 2) Under what circumstances would you want a conversion to look in a module-specific registry, and then fall back to the global registry? 3) Considering that we will have a "best-match" overloading system, what should take precedence, an inexact match in a module-specific registry, or an exact match in a global registry? (Clearly this is a moot point for to-Python conversion). Finally, can anyone give me a reason why having a global registry can lead to a violation of the "One Definition Rule"? This was alluded to many times in the earlier discussion, and there's no doubt that a global registry may lead to unexpected (from a given module's perspective) behavior - but I do not understand the implication that the global registry can result in formally undefined behavior by violating the ODR. Thanks! Jim _______________________________________________ Cplusplus-sig mailing list [hidden email] http://mail.python.org/mailman/listinfo/cplusplus-sig |
Jim,
My answer for 1) and 2) is that I would probably not use module-specific behavior. In my current project I separate the code into multiple BOOST_PYTHON_MODULEs, so I would want project-specific behavior rather than behavior specific to a single BOOST_PYTHON_MODULE. I could reorganize the code, but I rather like the idea of splitting the big Python module into several sub-modules.

As for project-specific behavior, I would actually always want 2), and I would want 1) optionally - so I would like to enable or disable 1) either in the build tool or with some global Python variables.

As for 3), I would prefer the exact match from the global registry. I assume this case would appear when a user overloads a function from Python.

-Holger

On Mon, Sep 19, 2011 at 23:03, Jim Bosch <[hidden email]> wrote:
> I'd like to restart the discussion on how conversions will work in Boost.Python v3. Here's the starting point:
>
> I'd like to see support for static, template-based conversions. These would be defined by [partial-]specializing a traits class, and I tend to think they should only be invoked after attempting all registry-based conversions. Users would have to include the same headers in groups of interdependent modules to avoid specializing the same traits class multiple times in different ways; I can't think of a way to protect them from this, but template-based specializations are a sufficiently advanced feature that I'm comfortable leaving it up to users to avoid this problem.
>
> We've had some discussion of allowing different modules to have different registries (in addition to a global registry shared across modules). Leaving aside implementation questions, I have a little survey for interested parties:
>
> 1) Under what circumstances would you want a conversion that is completely limited to a specific module (or a group of modules that explicitly declare it)?
>
> 2) Under what circumstances would you want a conversion to look in a module-specific registry, and then fall back to the global registry?
>
> 3) Considering that we will have a "best-match" overloading system, what should take precedence, an inexact match in a module-specific registry, or an exact match in a global registry? (Clearly this is a moot point for to-Python conversion).
>
> Finally, can anyone give me a reason why having a global registry can lead to a violation of the "One Definition Rule"? This was alluded to many times in the earlier discussion, and there's no doubt that a global registry may lead to unexpected (from a given module's perspective) behavior - but I do not understand the implication that the global registry can result in formally undefined behavior by violating the ODR.
>
> Thanks!
>
> Jim
On 19 Sep 2011 at 17:03, Jim Bosch wrote:
> I'd like to see support for static, template-based conversions. These would be defined by [partial-]specializing a traits class, and I tend to think they should only be invoked after attempting all registry-based conversions.

Surely not! You'd want to let template specialisation be the first port of call so the compiler can compile in obvious conversions, *then* and only then do you go to a runtime registry.

This also lets one override the runtime registry when needed in the local compiland. I'm not against having another set of template specialisations do something should the first set of specialisations fail, and/or the runtime registry lookup fails.

> Users would have to include the same headers in groups of interdependent modules to avoid specializing the same traits class multiple times in different ways; I can't think of a way to protect them from this, but template-based specializations are a sufficiently advanced feature that I'm comfortable leaving it up to users to avoid this problem.

Just make sure what you do works with precompiled headers :)

P.S.: This is trickier than it sounds.

> We've had some discussion of allowing different modules to have different registries (in addition to a global registry shared across modules). Leaving aside implementation questions, I have a little survey for interested parties:
>
> 1) Under what circumstances would you want a conversion that is completely limited to a specific module (or a group of modules that explicitly declare it)?

Defaults to most recent in the calling thread's stack, but overridable using a TLS override to allow impersonation. The same mechanism usefully also takes care of multiple Python interpreters too.

> 2) Under what circumstances would you want a conversion to look in a module-specific registry, and then fall back to the global registry?

As above. That implies that there is no global registry, just the default registry which all module registries inherit.
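The template-first, registry-second lookup order being argued for here can be mocked up in a few lines. This is an illustrative sketch only (the names `static_converter`, `registry`, and `convert` are invented, and `std::string` stands in for a Python object): the compiler resolves the specialization when one exists, and the runtime registry is consulted only as a fallback.

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>
#include <typeindex>

// Primary template: no compile-time conversion for T.
template <typename T>
struct static_converter {
    static constexpr bool specialized = false;
    static std::string convert(const T&) { return ""; }
};

// A compiled-in conversion for one hypothetical type.
struct Celsius { double value; };
template <>
struct static_converter<Celsius> {
    static constexpr bool specialized = true;
    static std::string convert(const Celsius& c) {
        return "static:" + std::to_string(c.value);
    }
};

// Runtime registry: type-erased converters keyed by C++ type.
std::map<std::type_index, std::function<std::string(const void*)>>& registry() {
    static std::map<std::type_index, std::function<std::string(const void*)>> r;
    return r;
}

template <typename T>
std::string convert(const T& v) {
    if (static_converter<T>::specialized)        // compile-time hit: the
        return static_converter<T>::convert(v);  // registry is never consulted
    auto it = registry().find(std::type_index(typeid(T)));
    return it != registry().end() ? it->second(&v) : "unconvertible";
}
```

With C++11 one would dispatch on `static_converter<T>::specialized` via tag dispatch or `enable_if` so the dead branch is never instantiated; the plain `if` keeps the sketch short.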
> 3) Considering that we will have a "best-match" overloading system, what should take precedence, an inexact match in a module-specific registry, or an exact match in a global registry? (Clearly this is a moot point for to-Python conversion).

The way I've always done this is to have the template metaprogramming set up a series of type comparison functions which return scores. This pushes most of the scoring and weighting into the compiler, and the compiler will elide any calls into the dynamic registry where the scoring makes that sensible. Makes compile times rather longer though :)

The dynamic and compile-time registries can be merged easily enough, so the runtime registry is just a set of comparison functions normally elided by the compiler in other modules. In other words, mark the inline functions as visible outside the current DLL (dllexport/visibility(default)) so the compiler will assemble complete versions for external usage.

> Finally, can anyone give me a reason why having a global registry can lead to a violation of the "One Definition Rule"? This was alluded to many times in the earlier discussion, and there's no doubt that a global registry may lead to unexpected (from a given module's perspective) behavior - but I do not understand the implication that the global registry can result in formally undefined behavior by violating the ODR.

ODR only matters in practice for anything visible outside the current compiland. If compiling with GCC -fvisibility=hidden, or on MSVC by default, you can define class foo to be anything you like so long as nothing outside the current compiland can see class foo.

ODR is really important, though, across DLLs. If DLL X says that class foo is one thing and DLL Y says it's something different, expect things to go very badly wrong. Hence I simply wouldn't have a global registry. It's bad design. You *have* to have per-module registries and *only* per-module registries.

Imagine the following.
Program A loads DLL B and DLL C. DLL B is dependent on DLL D, which uses BPL. DLL C is dependent on DLL E, which uses BPL.

DLL D tells BPL that class foo is implicitly convertible with an integer. DLL E tells BPL that class foo is actually a thin wrapper for std::string.

Right now, with present BPL, we have to load two copies of BPL, one for DLL D and one for DLL E. They maintain separate type registries, so all is good.

But what if DLL B returns a Python function to Program A, which then installs it as a callback with DLL C? In the normal case, BPL code in DLL E will call into BPL code in DLL D and all is well. But what if the function in DLL D throws an exception? This gets converted into a C++ exception by throwing boost::python::error_already_set.

Now the C++ runtime must figure out where to send the exception. But what is the C++ runtime supposed to do with such an exception type? It isn't allowed to see the copy of BPL living in DLL E, so it will fire the exception type into DLL D, where it doesn't belong. At this point, the program will almost certainly segfault.

Whatever you do with BPL in the future, it MUST support being a dependency of multiple DLLs simultaneously. It MUST know who is calling what and when, and know how to unwind everything at any particular stage. This implies that it must be 100% compatible with dlopen(RTLD_GLOBAL).

As I mentioned earlier, this is a very semantically similar problem to supporting multiple Python interpreters with each calling into one another. You can kill two birds with one stone as a result.

HTH,
Niall

--
Technology & Consulting Services - ned Productions Limited.
http://www.nedproductions.biz/. VAT reg: IE 9708311Q. Company no: 472909.
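The score-based matching described above (comparison functions returning scores, best score wins) can be sketched without any Boost.Python machinery. Everything here is invented for illustration: `PyArg` stands in for a `PyObject*`, the score values are arbitrary, and a real implementation would generate the match functions by template metaprogramming rather than write them by hand.

```cpp
#include <cassert>
#include <string>
#include <vector>

struct PyArg { std::string type; };   // stand-in for a Python argument

struct Candidate {
    std::string name;                 // which C++ overload this is
    int (*match)(const PyArg&);       // higher score = better match
};

// Hand-written stand-ins for generated comparison functions.
int match_int(const PyArg& a)   { return a.type == "int"   ? 100
                                       : a.type == "bool"  ?  50 : -1; }
int match_float(const PyArg& a) { return a.type == "float" ? 100
                                       : a.type == "int"   ?  80 : -1; }

// Overload resolution: pick the highest-scoring candidate; a negative
// score means "no viable conversion".
const Candidate* resolve(const std::vector<Candidate>& cands, const PyArg& a) {
    const Candidate* best = nullptr;
    int best_score = -1;
    for (const Candidate& c : cands) {
        int s = c.match(a);
        if (s > best_score) { best_score = s; best = &c; }
    }
    return best;
}
```

This also illustrates Jim's question 3 in miniature: an exact match (score 100) always beats an inexact one (score 80), regardless of which registry the candidate came from, so precedence becomes a property of the scores rather than of registry order.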
On 09/20/2011 11:06 AM, Niall Douglas wrote:
> On 19 Sep 2011 at 17:03, Jim Bosch wrote:
>
>> I'd like to see support for static, template-based conversions. These would be defined by [partial-]specializing a traits class, and I tend to think they should only be invoked after attempting all registry-based conversions.
>
> Surely not! You'd want to let template specialisation be the first port of call so the compiler can compile in obvious conversions, *then* and only then do you go to a runtime registry.
>
> This also lets one override the runtime registry when needed in the local compiland. I'm not against having another set of template specialisations do something should the first set of specialisations fail, and/or the runtime registry lookup fails.

I'd also considered having a different set of template conversions that are checked first for performance reasons, but I'd actually viewed the override-preference argument from the opposite direction: once a template converter traits class has been fully specialized, you can't specialize it again differently in another module (well, maybe symbol visibility labels can get you out of that bind in practice). So it seemed a registry-based override would be the only way to override a template-based conversion, and hence the registry-based conversions would have to go first.

But overall I think your proposal to just try the templates first is cleaner, because having multiple specializations of the same traits class in different modules would be a problem either way; allowing users to override the compile-time conversions with registry-based conversions is at best a poor workaround.

>> Users would have to include the same headers in groups of interdependent modules to avoid specializing the same traits class multiple times in different ways; I can't think of a way to protect them from this, but template-based specializations are a sufficiently advanced feature that I'm comfortable leaving it up to users to avoid this problem.
>
> Just make sure what you do works with precompiled headers :)
>
> P.S.: This is trickier than it sounds.

Yuck. Precompiled headers are something I've never dealt with before, but I suppose I had better learn.

>> We've had some discussion of allowing different modules to have different registries (in addition to a global registry shared across modules). Leaving aside implementation questions, I have a little survey for interested parties:
>>
>> 1) Under what circumstances would you want a conversion that is completely limited to a specific module (or a group of modules that explicitly declare it)?
>
> Defaults to most recent in calling thread stack, but overridable using a TLS override to allow impersonation.
>
> The same mechanism usefully also takes care of multiple python interpreters too.

I have to admit I'm only barely following you here - threads are another thing I don't deal with often. It sounds like you have a totally different option from the ones I was anticipating. Could you explain in more detail how this would work?

>> 2) Under what circumstances would you want a conversion to look in a module-specific registry, and then fall back to the global registry?
>
> As above. That implies that there is no global registry, just the default registry which all module registries inherit.

(I'm still a little confused about what you mean here.)

>> 3) Considering that we will have a "best-match" overloading system, what should take precedence, an inexact match in a module-specific registry, or an exact match in a global registry? (Clearly this is a moot point for to-Python conversion).
>
> The way I've always done this is to have the template metaprogramming set a series of type comparison functions which return scores. This pushes most of the scoring and weighting into the compiler and the compiler will elide any calls into the dynamic registry where the scoring makes that sensible. Makes compile times rather longer though :)
>
> The dynamic and compile-time registries can be merged easily enough, so all the runtime registry is is a set of comparison functions normally elided by the compiler in other modules. In other words, mark the inline functions as visible outside the current DLL (dllexport/visibility(default)) so the compiler will assemble complete versions for external usage.

An interesting idea - avoiding trying all possible conversions at runtime seems a very worthy goal, though I could also see this inflating the size of the modules. Can you point me at anything existing for an example?

>> Finally, can anyone give me a reason why having a global registry can lead to a violation of the "One Definition Rule"? This was alluded to many times in the earlier discussion, and there's no doubt that a global registry may lead to unexpected (from a given module's perspective) behavior - but I do not understand the implication that the global registry can result in formally undefined behavior by violating the ODR.
>
> ODR only matters in practice for anything visible outside the current compiland. If compiling with GCC -fvisibility=hidden, or on any MSVC by default, you can define class foo to be anything you like so long as nothing outside the current compiland can see class foo.
>
> ODR is real important though across DLLs. If a DLL X says that class foo is one thing and DLL Y says it's something different, expect things to go very badly wrong. Hence I simply wouldn't have a global registry. It's bad design. You *have* to have per module registries and *only* per module registries.
>
> Imagine the following. Program A loads DLL B and DLL C. DLL B is dependent on DLL D which uses BPL. DLL C is dependent on DLL E which uses BPL.
>
> DLL D tells BPL that class foo is implicitly convertible with an integer.
>
> DLL E tells BPL that class foo is actually a thin wrapper for std::string.
>
> Right now with present BPL, we have to load two copies of BPL, one for DLL D and one for DLL E. They maintain separate type registries, so all is good.
>
> But what if DLL B returns a python function to Program A, which then installs it as a callback with DLL C?
>
> In the normal case, BPL code in DLL E will call into BPL code in DLL D and all is well.
>
> But what if the function in DLL D throws an exception?
>
> This gets converted into a C++ exception by throwing boost::python::error_already_set.
>
> Now the C++ runtime must figure where to send the exception. But what is the C++ runtime supposed to do with such an exception type? It isn't allowed to see the copy of BPL living in DLL E, so it will fire the exception type into DLL D where it doesn't belong. At this point, the program will almost certainly segfault.
>
> Whatever you do with BPL in the future, it MUST support being a dependency of multiple DLLs simultaneously. It MUST know who is calling what and when, and know how to unwind everything at any particular stage. This implies that it must be 100% compatible with dlopen(RTLD_GLOBAL).
>
> As I mentioned earlier, this is a very semantically similar problem to supporting multiple python interpreters anyway with each calling into one another. You can kill two birds with the one stone as a result.

If I understand your argument, it's not the global registry that causes ODR violations - it's the fact that you're trying to mimic having local registries by forcing distinct BPLs for each module, and that makes BPL symbols ambiguous. If you had a pair of modules that were happy using each other's converters, they would do the standard thing and share one BPL and one registry, and you wouldn't have any ODR problems.

In other words, it's not the fact that DLL D and DLL E register different conversions for class foo that causes the ODR problems; that just makes modules interact unfortunately (but in a deterministic and debuggable way). It's the workaround (loading multiple BPLs) that causes the actual ODR problems.

So it sounds like we agree that we should only ever have one BPL loaded. We just need to implement the registry so it can know which module DLL instance a particular registry lookup is coming from, whether that's using special module-instance IDs or compiling the registries into the module DLLs or something else.

Is that right?

Thanks!

Jim
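The single-registry-with-module-identity design Jim is converging on here can be sketched in miniature. This is a hypothetical layout, not any existing API: one shared process-wide registry, plus per-module shadow entries, with every lookup carrying the requesting module's ID so a module-local registration can mask the shared one. `std::string` stands in for real type identities and converter functions.

```cpp
#include <cassert>
#include <map>
#include <string>

using TypeName  = std::string;  // stand-in for a C++ type identity
using Converter = std::string;  // stand-in for a conversion function

struct ScopedRegistry {
    std::map<TypeName, Converter> shared;                        // one per process
    std::map<std::string, std::map<TypeName, Converter>> local;  // keyed by module ID

    // Lookup order: the requesting module's own entries first,
    // then the shared default.
    const Converter* find(const std::string& module_id, const TypeName& t) const {
        auto m = local.find(module_id);
        if (m != local.end()) {
            auto c = m->second.find(t);
            if (c != m->second.end()) return &c->second;
        }
        auto c = shared.find(t);
        return c != shared.end() ? &c->second : nullptr;
    }
};
```

In the DLL D / DLL E example from earlier in the thread, each DLL would register its `foo` converter under its own module ID; both views of `foo` coexist in one registry inside one loaded BPL, avoiding the duplicate-library workaround entirely.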
On 20 Sep 2011 at 12:38, Jim Bosch wrote:
> I'd also considered having a different set of template conversions that are checked first for performance reasons, but I'd actually viewed the override preference argument from the opposite direction - once a template converter traits class has been fully specialized, you can't specialize it again differently in another module (well, maybe symbol visibility labels can get you out of that bind in practice). So it seemed a registry-based override would be the only way to override a template-based conversion, and hence the registry-based conversions would have to go first.

Ah, sorry, I didn't explain myself well at all. I've been doing a lot of work surrounding the ISO C and C++ standards recently, so my head is kinda trapped in future C and C++.

When I was speaking of ODR, I was kinda assuming that we have C++ modules available for newer compilers in the post-C++1x TR (see http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2316.pdf), and that we can emulate much of module support using -fvisibility=hidden on GCC. On MSVC, of course, you get module proto-support for free anyway due to how their DLLs work.

You're absolutely correct that right now, outside of the Windows platform, ODR is a process-wide problem in most compilers on their default settings. That's a PITA, so everyone is agreed that we ought to do something about it. The big problem is how far we ought to go, hence N2316 not making it into C++1x and being pushed into the TR. What I can say is that that TR will very likely be highly compatible with the Windows DLL system (and its GCC visibility near-equivalent) due to backwards compatibility. I would suggest that you code as if both are true as a reasonable proxy for future C++ module support. Then you're covered ten years down the line from now.

> But overall I think your proposal to just try the templates first is cleaner, because having multiple specializations of the same traits class in different modules would be a problem either way; allowing users to override the compile-time conversions with registry-based conversions is at best a poor workaround.

I know this is a little off-topic, but Boost could really do with a generic runtime type registry implementation. There are lots of use cases outside BPL, and if we had one highly extensible, properly written system it could be applied to lots of use cases. For example, Java-style automagical metaprogrammed C++ type reflection into SQL is perfectly possible. At the time I wrote it, it was the only example of it anywhere I could find (maybe things have since changed). It makes talking to databases super-easy at the cost of making the compiler work very hard. There are lots more use cases too, e.g. talking with .NET, or Objective-C.

>> Just make sure what you do works with precompiled headers :)
>>
>> P.S.: This is trickier than it sounds.
>
> Yuck. Precompiled headers are something I've never dealt with before, but I suppose I had better learn.

Getting them working can make the difference between a several-hour recompile and ten minutes. They're painful though, due to compiler bugs.

>> The same mechanism usefully also takes care of multiple python interpreters too.
>
> I have to admit I'm only barely following you here - threads are another thing I don't deal with often. It sounds like you have a totally different option from the ones I was anticipating. Could you explain in more detail how this would work?

Sure. You have the problem when working with Python of handling the GIL, which is strongly related to what the "current" interpreter is. These are TLS items in Python, so each thread has its own current setting. Therefore, what one really ought to have in BPL is something like:

    // normal C++ code
    ...
    // I want to call python code in interpreter X
    {
      boost::python::hold_interpreter interpreter_holder(X); // Replaces "current" interpreter with X
      boost::python::hold_GIL gil_holder(interpreter_holder); // Acquire the GIL for that interpreter
      call_some_BPL_or_python_function();
    }
    // On scope exit gil_holder and interpreter_holder get destroyed, thus
    // releasing the GIL and resetting the "current" interpreter to whatever
    // it was before
    // Back to normal C++ code

This obviously refers to the embedded case, but it ought to be similar when BPL calls into C++: the "current" interpreter should be available per thread as a BPL object instance wrapping the Python TLS config. Then a call into C++ can safely call into other interpreters.

What's useful here, of course, is that you can keep a per-thread list of interpreter nestings. This means you can see exactly which module entered which interpreter and in which order, and therefore what to search and what to unwind when necessary.

> An interesting idea - avoid trying all possible conversions at runtime seems a very worthy goal, though I could also see this inflating the size of the modules. Can you point me at anything existing for an example?

The closest that I have publicly available is the SQL type reflection machinery in TnFOX. Have a look at the following:

https://github.com/ned14/tnfox/blob/master/include/TnFXSQLDB.h
https://github.com/ned14/tnfox/blob/master/include/TnFXSQLDB_ipc.h
https://github.com/ned14/tnfox/blob/master/include/TnFXSQLDB_sqlite3.h

Note that this is an entirely *static* type registry, so it exists exclusively in the compiler. It does happily extend into a dynamic registry, however. I can supply the source which extends the TnFOX static registry with a dynamic runtime, but I'd need you to agree to an NDA and a promise not to distribute them.
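The per-thread interpreter-nesting list behind `hold_interpreter` can be mocked without any Python at all. This is a self-contained sketch of the mechanism only: `hold_interpreter` is Niall's hypothetical name (not a real Boost.Python class), interpreters are plain strings, and the GIL side is omitted. A `thread_local` stack records which interpreter each scope made "current", so scope exit automatically restores the previous one.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Per-thread stack of nested "current interpreter" scopes.
thread_local std::vector<std::string> g_interp_stack;

struct hold_interpreter {
    explicit hold_interpreter(const std::string& interp) {
        g_interp_stack.push_back(interp);   // make `interp` current on this thread
    }
    ~hold_interpreter() {
        g_interp_stack.pop_back();          // restore whatever was current before
    }
    // Non-copyable, like any RAII guard.
    hold_interpreter(const hold_interpreter&) = delete;
    hold_interpreter& operator=(const hold_interpreter&) = delete;
};

std::string current_interpreter() {
    return g_interp_stack.empty() ? "none" : g_interp_stack.back();
}
```

Because the whole nesting history is on the stack, code can also walk it to answer "which module entered which interpreter, in what order" - which is exactly the unwinding information the exception-across-DLLs scenario needs.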
> If I understand your argument, it's not the global registry that causes ODR violations - it's the fact that you're trying to mimic having local registries by forcing distinct BPLs for each module, and that makes BPL symbols ambiguous. If you had a pair of modules that were happy using each other's converters, they would do the standard thing and share one BPL and one registry and you wouldn't have any ODR problems.

ODR is a C++ (and C) spec issue and has nothing to do with BPL per se. It's rather that because real-world code routinely violates ODR, it becomes a problem for anything which operates a type registry. BTW, code can't help violating it. Libraries have absolutely no control over what they must coexist with in a given process.

> In other words, it's not the fact that DLL D and DLL E register different conversions for class foo that causes the ODR problems; that just makes modules interact unfortunately (but in a deterministic and debuggable way). It's the workaround (loading multiple BPLs) that causes the actual ODR problems.

RTLD_GLOBAL operates okay for most C++ programs because that's the default. Indeed, until very recently, GCC couldn't throw exceptions properly unless RTLD_GLOBAL was set. Unfortunately, Python sets RTLD_LOCAL for the process because, up until I patched GCC to add -fvisibility, there was no easy way to separate Python extension modules from one another. They routinely defined functions with identical symbols, and therefore one got all sorts of unpleasant conflicts. One therefore gets a big problem when using anything C++ with a type registry within Python. One typically has to resort to unpleasant hacking of dlopen settings.

> So it sounds like we agree that we should only ever have one BPL loaded. We just need to implement the registry so it can know which module DLL instance a particular registry lookup is coming from, whether that's using special module-instance IDs or compiling the registries into the module DLLs or something else.
>
> Is that right?

Ah, but it gets worse! You can't guarantee that BPL won't be loaded multiply anyway. For example, one might have dependencies on two separate versions of BPL, or some sublibrary might link a copy of BPL in statically. In fact, you can't even guarantee that there aren't multiple Pythons running! One (nasty) way of implementing parallel Python is to instantiate multiple Pythons, each with their own GIL, and run them in separate threads. Of course, forking yourself is far saner.

In the end, though, BPL is a *library*. You have absolutely no control over what you're combined with, but you can try your best for most reasonable scenarios. I know this sounds tricky, but what you need is a design which copes with having one BPL loaded or many, and/or one Python loaded or many, and/or one interpreter running or many. If you follow the system described above, where each thread keeps a list of which BPL and Python interpreter is "current", you now know which type registries to search in any given scenario.

You can see most of an existing implementation of what I described above at:

https://github.com/ned14/tnfox/blob/master/Python/FXPython.h
https://github.com/ned14/tnfox/blob/master/Python/FXPython.cxx

And oh, BTW, here is a very useful piece of C++ metaprogramming for BPL:

https://github.com/ned14/tnfox/blob/master/Python/FXCodeToPythonCode.h

This lets you handle a limitation in present BPL where you want to supply one of a list of Python functions as a C callback function, e.g. a comparison function for sorting. The metaprogramming generates an N-member jump table, and you supply a policy which thunks the C callback into Python. You can then install or deinstall Python functions in a "slot", as it were, and pass the appropriate C wrapper to the code taking the C callback. In other words, the metaprogramming generates a unique C function address for each unique Python function (for the possibilities supplied). This is extremely useful.

Hope these help. If you have any questions, please do ask. I always felt it a shame I never had the time to port the TnFOX extensions to BPL back into Boost; kinda wasted me writing it all, as no one uses TnFOX :(

Niall

--
Technology & Consulting Services - ned Productions Limited.
http://www.nedproductions.biz/. VAT reg: IE 9708311Q. Company no: 472909.
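The jump-table trick behind FXCodeToPythonCode can be sketched in a few lines of plain C++ (this is an illustration of the general technique, not FXCodeToPythonCode's actual code): instantiating a function template at distinct integer arguments stamps out distinct C-callable addresses, each forwarding to whatever callable is installed in its slot. A `std::function` stands in for the wrapped Python function.

```cpp
#include <cassert>
#include <functional>

static const int kSlots = 4;                    // size of the jump table
std::function<int(int, int)> g_slots[kSlots];   // the installed callables
                                                // (Python functions, in BPL terms)

// Each instantiation of this template is a distinct C function with
// its own unique address, forwarding to its slot.
template <int N>
int trampoline(int a, int b) {
    return g_slots[N](a, b);
}

// The C callback signature expected by, e.g., a qsort-style API.
using Cmp = int (*)(int, int);

// The jump table of distinct C function pointers.
Cmp slot_address(int n) {
    static const Cmp table[kSlots] = {
        &trampoline<0>, &trampoline<1>, &trampoline<2>, &trampoline<3>
    };
    return table[n];
}
```

Installing a different callable in each slot and handing out `slot_address(n)` gives every callable its own C function address - which is the limitation being worked around, since a plain captureless wrapper can only provide one address for all of them.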
on Tue Sep 20 2011, "Niall Douglas" <s_sourceforge-AT-nedprod.com> wrote:

> On 19 Sep 2011 at 17:03, Jim Bosch wrote:
>
>> I'd like to see support for static, template-based conversions. These would be defined by [partial-]specializing a traits class, and I tend to think they should only be invoked after attempting all registry-based conversions.
>
> Surely not! You'd want to let template specialisation be the first port of call so the compiler can compile in obvious conversions, *then* and only then do you go to a runtime registry.

I don't understand why you guys would want compile-time converters at all, really. Frankly, I think they should all be eliminated. They complicate the model Boost.Python needs to support and cause confusion when the built-in ones mask runtime conversions.

> This also lets one override the runtime registry when needed in the local compiland. I'm not against having another set of template specialisations do something should the first set of specialisations fail, and/or the runtime registry lookup fails.

There are better ways to deal with conversion specialization, IMO. The runtime registry should be scoped, and it should be possible to find the "nearest eligible converter" based on the Python module hierarchy.

>> Users would have to include the same headers in groups of interdependent modules to avoid specializing the same traits class multiple times in different ways; I can't think of a way to protect them from this, but template-based specializations are a sufficiently advanced feature that I'm comfortable leaving it up to users to avoid this problem.
>
> Just make sure what you do works with precompiled headers :)

Another problem that you avoid by not supporting compile-time selection of converters.

>> 3) Considering that we will have a "best-match" overloading system, what should take precedence, an inexact match in a module-specific registry, or an exact match in a global registry? (Clearly this is a moot point for to-Python conversion).

Nearer scopes should mask more distant scopes. This is unfortunately necessary, or you get unpredictable results depending on the context in which you're running (all the other modules in the system).

> Imagine the following. Program A loads DLL B and DLL C. DLL B is dependent on DLL D which uses BPL. DLL C is dependent on DLL E which uses BPL.

Jeez, I'm going to have to graph this:

      A
     / \
    B   C
    |   |
    D   E
     \ /
     BPL

> DLL D tells BPL that class foo is implicitly convertible with an integer.
>
> DLL E tells BPL that class foo is actually a thin wrapper for std::string.
>
> Right now with present BPL, we have to load two copies of BPL, one for DLL D and one for DLL E. They maintain separate type registries, so all is good.

That's not correct. Boost.Python was designed to deal with scenarios like this and be run as a single instance in such a system, with a single registry.

> But what if DLL B returns a python function to Program A, which then installs it as a callback with DLL C?

OMG, could you make this more convoluted, please?

> In the normal case, BPL code in DLL E will call into BPL code in DLL D and all is well.
>
> But what if the function in DLL D throws an exception?
>
> This gets converted into a C++ exception by throwing boost::python::error_already_set.
>
> Now the C++ runtime must figure where to send the exception. But what is the C++ runtime supposed to do with such an exception type? It isn't allowed to see the copy of BPL living in DLL E, so it will fire the exception type into DLL D where it doesn't belong. At this point, the program will almost certainly segfault.

Sorry, you completely lost me here.

> As I mentioned earlier, this is a very semantically similar problem to supporting multiple python interpreters anyway with each calling into one another.

How exactly is one Python interpreter supposed to "call into" another one? Are you suggesting they have their own threads and one blocks to wait for the other, or is it something completely different?

--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com
On 10/05/2011 07:21 AM, Dave Abrahams wrote:
> on Tue Sep 20 2011, "Niall Douglas" <s_sourceforge-AT-nedprod.com> wrote:
>
>> On 19 Sep 2011 at 17:03, Jim Bosch wrote:
>>
>>> I'd like to see support for static, template-based conversions. These
>>> would be defined by [partial-]specializing a traits class, and I tend to
>>> think they should only be invoked after attempting all registry-based
>>> conversions.
>>
>> Surely not! You'd want to let template specialisation be the first
>> port of call so the compiler can compile in obvious conversions,
>> *then* and only then do you go to a runtime registry.
>
> I don't understand why you guys would want compile-time converters at
> all, really. Frankly, I think they should all be eliminated. They
> complicate the model Boost.Python needs to support and cause confusion
> when the built-in ones mask runtime conversions.

Indeed, I never understood that myself. At the Python/C++ language
boundary there is no such thing as "compile-time".

>> This also lets one override the runtime registry when needed in the
>> local compiland. I'm not against having another set of template
>> specialisations do something should the first set of specialisations
>> fail, and/or the runtime registry lookup fails.
>
> There are better ways to deal with conversion specialization, IMO. The
> runtime registry should be scoped, and it should be possible to find the
> "nearest eligible converter" based on the python module hierarchy.

...combined with some hints users can add to their modules. Again, I
think we should favor explicit conversion policy settings over implicit
ones.

Sorry, I haven't yet managed to find time to sketch this out in any
detail. I hope to be able to do that to help with this project, though.

Thanks,
Stefan

--
...ich hab' noch einen Koffer in Berlin...
On 10/05/2011 07:21 AM, Dave Abrahams wrote:
> on Tue Sep 20 2011, "Niall Douglas" <s_sourceforge-AT-nedprod.com> wrote:
>
>> On 19 Sep 2011 at 17:03, Jim Bosch wrote:
>>
>>> I'd like to see support for static, template-based conversions. These
>>> would be defined by [partial-]specializing a traits class, and I tend to
>>> think they should only be invoked after attempting all registry-based
>>> conversions.
>>
>> Surely not! You'd want to let template specialisation be the first
>> port of call so the compiler can compile in obvious conversions,
>> *then* and only then do you go to a runtime registry.
>
> I don't understand why you guys would want compile-time converters at
> all, really. Frankly, I think they should all be eliminated. They
> complicate the model Boost.Python needs to support and cause confusion
> when the built-in ones mask runtime conversions.

I have one (perhaps unusual) use case that's extremely important for me:
I have a templated matrix/vector/array class, and I want to define
converters between those types and numpy that work with any combination
of template parameters. I can do that with compile-time converters, and
after including the header everything just works. With runtime
conversions, I have to explicitly declare all the template parameter
combinations I intend to use.

>> This also lets one override the runtime registry when needed in the
>> local compiland. I'm not against having another set of template
>> specialisations do something should the first set of specialisations
>> fail, and/or the runtime registry lookup fails.
>
> There are better ways to deal with conversion specialization, IMO. The
> runtime registry should be scoped, and it should be possible to find the
> "nearest eligible converter" based on the python module hierarchy.

I think this might turn into something that approaches the same mass of
complexity Niall describes, because a Python module can be imported into
several places in a hierarchy at once, and it seems we'd have to track
which instance of the module is active in order to resolve those scopes
correctly.

I do hope that most people won't mind if I don't implement something as
completely general as what Niall has described - there is a lot of
complexity there I think most users don't need, and I hope he'd be
willing to help with that if he does need to deal with e.g. passing
callbacks between multiple interpreters. But I'm also afraid he might be
onto something in pointing out that fixing the more standard cases might
already be more complicated than it seems.

Jim
On 10/05/2011 09:03 AM, Stefan Seefeld wrote:
> ...combined with some hints users can add to their modules. Again, I
> think we should favor explicit conversion policy settings over implicit
> ones.
>
> Sorry, I haven't yet managed to find time to sketch this out in any
> detail. I hope to be able to do that to help with this project, though.

Unfortunately, I have to admit there's no rush - I have plenty of other
things taking most of my time at the moment, so you're in no danger of
being left out of the discussion by being busy.

I am very curious to see exactly how you see this working, however; to
me the notion of explicit conversions between modules seems to require
the developer of one module to know too much about the internals of
another. But I'm sure you've got your reasons.

Jim
On 10/05/2011 09:18 AM, Jim Bosch wrote:
> I have one (perhaps unusual) use case that's extremely important for
> me: I have a templated matrix/vector/array class, and I want to define
> converters between those types and numpy that work with any
> combination of template parameters. I can do that with compile-time
> converters, and after including the header everything just works.
> With runtime conversions, I have to explicitly declare all the
> template parameter combinations I intend to use.

Jim,

I may be a little slow here, but I still don't see the issue. You need
to export your classes to Python one at a time anyhow, i.e. not as a
template, letting the Python runtime figure out all valid template
argument permutations. So why can't the converter definitions simply be
bound to those type definitions?

Thanks,
Stefan

--
...ich hab' noch einen Koffer in Berlin...
On 10/05/2011 09:26 AM, Stefan Seefeld wrote:
> On 10/05/2011 09:18 AM, Jim Bosch wrote:
>
>> I have one (perhaps unusual) use case that's extremely important for
>> me: I have a templated matrix/vector/array class, and I want to define
>> converters between those types and numpy that work with any
>> combination of template parameters. I can do that with compile-time
>> converters, and after including the header everything just works.
>> With runtime conversions, I have to explicitly declare all the
>> template parameter combinations I intend to use.
>
> Jim,
>
> I may be a little slow here, but I still don't see the issue. You need
> to export your classes to Python one at a time anyhow, i.e. not as a
> template, letting the Python runtime figure out all valid template
> argument permutations. So why can't the converter definitions simply be
> bound to those type definitions?

The key point is that I'm not exporting these with "class_"; I define
converters that go directly to and from numpy.ndarray. So if I define
template-based converters for my class ("ndarray::Array<T,N,C>"), a
function that takes one as an argument:

    void fillArray(ndarray::Array<double,2,1> array);

...can be wrapped to take a numpy.ndarray as an argument, just by doing:

    #include "array-from-python.hpp"
    ...
    bp::def("fillArray", &fillArray);

Without template converters, I also have to add something like:

    register_array_from_python< ndarray::Array<double,2,1> >();

(where register_array_from_python is some custom runtime converter I'd
have written) and repeat that for every instantiation of ndarray::Array
I use. This involves looking through all my code, finding all the
combinations of template parameters I use, and registering each one
exactly once across all modules.

That would get better with some sort of multi-module registry support,
but I don't think I should have to declare the converters for each set
of template parameters at all; it's better just to write a single
compile-time converter.

Jim
on Wed Oct 05 2011, Jim Bosch <talljimbo-AT-gmail.com> wrote:

> On 10/05/2011 07:21 AM, Dave Abrahams wrote:
>
>> I don't understand why you guys would want compile-time converters at
>> all, really. Frankly, I think they should all be eliminated. They
>> complicate the model Boost.Python needs to support and cause confusion
>> when the built-in ones mask runtime conversions.
>
> I have one (perhaps unusual) use case that's extremely important for
> me: I have a templated matrix/vector/array class, and I want to define
> converters between those types and numpy that work with any
> combination of template parameters. I can do that with compile-time
> converters, and after including the header everything just works.

Not really. In the end you can only expose particular specializations
of the templates to Python, and you have to decide, somehow, what those
are.

> With runtime conversions, I have to explicitly declare all the
> template parameter combinations I intend to use.

Not really; a little metaprogramming makes it reasonably easy to
generate all those combinations. You can also use compile-time triggers
to register runtime converters. I'm happy to demonstrate if you like.

>> There are better ways to deal with conversion specialization, IMO. The
>> runtime registry should be scoped, and it should be possible to find the
>> "nearest eligible converter" based on the python module hierarchy.
>
> I think this might turn into something that approaches the same mass
> of complexity Niall describes,

Nothing ever needs to be quite as complex as what Niall describes ;-)

(no offense intended, Niall)

> because a Python module can be imported into several places in a
> hierarchy at once, and it seems we'd have to track which instance of
> the module is active in order to resolve those scopes correctly.

Meh. I think a module has an official identity, its __name__.

> I do hope that most people won't mind if I don't implement something
> as completely general as what Niall has described

No problem. As the original author I think you should give what I
describe a little more weight in this discussion, though ;-)

> - there is a lot of complexity there I think most users don't need,
> and I hope he'd be willing to help with that if he does need to deal
> with e.g. passing callbacks between multiple interpreters. But I'm
> also afraid he might be onto something in pointing out that fixing the
> more standard cases might already be more complicated than it seems.

Don't let him scare you off. He's a very smart guy, and a good guy, but
he tends to describe things in a way that I find to be needlessly
daunting.

--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com
On 5 Oct 2011 at 7:21, Dave Abrahams wrote:
> >> I'd like to see support for static, template-based conversions. These
> >> would be defined by [partial-]specializing a traits class, and I tend to
> >> think they should only be invoked after attempting all registry-based
> >> conversions.
> >
> > Surely not! You'd want to let template specialisation be the first
> > port of call so the compiler can compile in obvious conversions,
> > *then* and only then do you go to a runtime registry.
>
> I don't understand why you guys would want compile-time converters at
> all, really. Frankly, I think they should all be eliminated. They
> complicate the model Boost.Python needs to support and cause confusion
> when the built-in ones mask runtime conversions.

What I was proposing was that the compile-time registry is identical to
the runtime registry, hence the order of lookup, so a lot of the simpler
conversions can be done inline by the compiler. Sure, the same system
can be abused to have special per-compiland behaviours. I personally
have found that rather useful for working around very special situations
such as compiler bugs.

I agree that you shouldn't have two separate systems, and 99% of the
time both registries need to do the same thing. In my own code in fact I
have a lot of unit tests ensuring that the compile-time and run-time
registries behave identically.

> > Imagine the following. Program A loads DLL B and DLL C. DLL B is
> > dependent on DLL D which uses BPL. DLL C is dependent on DLL E which
> > uses BPL.
>
> Jeez, I'm going to have to graph this
>
>       A
>      / \
>     B   C
>     |   |
>     D   E
>      \ /
>      BPL

You can't guarantee that Dave. It depends on what flags to dlopen the
end user uses. And right now, Python itself defaults to multiple BPLs.

> > Right now with present BPL, we have to load two copies of BPL, one
> > for DLL D and one for DLL E. They maintain separate type registries,
> > so all is good.
>
> That's not correct. Boost.Python was designed to deal with scenarios
> like this and be run as a single instance in such a system, with a
> single registry.

http://muttley.hates-software.com/2006/01/25/c37456e6.html

There are plenty more all over the net.

> > But what if DLL B returns a python function to Program A, which then
> > installs it as a callback with DLL C?
>
> OMG, could you make this more convoluted, please?

No, it's a valid use case. Again, search google and you'll see. Lots of
people with this same problem.

> > As I mentioned earlier, this is a very semantically similar problem
> > to supporting multiple python interpreters anyway with each calling
> > into one another.
>
> How exactly is one python interpreter supposed to "call into" another
> one? Are you suggesting they have their own threads and one blocks to
> wait for the other, or is it something completely different?

Right now BPL doesn't touch the GIL or current interpreter context. I'm
saying it ought to manage both, because getting it right isn't obvious.
And once again, if program A causes the loading of two DLLs each of
which runs its own python interpreter, you can get all sorts of unfun
when the two interpreters call into one another.

Niall

--
Technology & Consulting Services - ned Productions Limited.
http://www.nedproductions.biz/. VAT reg: IE 9708311Q. Company no: 472909.
On 5 Oct 2011 at 9:18, Jim Bosch wrote:
> I think this might turn into something that approaches the same mass of
> complexity Niall describes, because a Python module can be imported into
> several places in a hierarchy at once, and it seems we'd have to track
> which instance of the module is active in order to resolve those scopes
> correctly.
>
> I do hope that most people won't mind if I don't implement something as
> completely general as what Niall has described - there is a lot of
> complexity there I think most users don't need, and I hope he'd be
> willing to help with that if he does need to deal with e.g. passing
> callbacks between multiple interpreters. But I'm also afraid he might
> be onto something in pointing out that fixing the more standard cases
> might already be more complicated than it seems.

It's really not that complex when implemented, honestly. It's just
complex creating that simple design to cover all the possible use cases.
Once the design is down, you'd be amazed at how little code it turns
into.

Obviously Jim, you're the one who's implementing it, so you do what you
like. However, I would suggest that you might consider setting up a wiki
page on Boost's trac (https://svn.boost.org/trac/boost/ ?) describing
the proposed design in detail.

I'm also happy to offer a full project management host for your efforts
on ned Productions' Redmine site (http://www.nedproductions.biz/redmine/)
if you'd prefer. You'd get your own full self-contained project there.

In either case, I'm sure people from the list here would be happy to
comment and/or contribute to the design document even if they are unable
to contribute code.

Niall

--
Technology & Consulting Services - ned Productions Limited.
http://www.nedproductions.biz/. VAT reg: IE 9708311Q. Company no: 472909.
On 5 Oct 2011 at 11:30, Dave Abrahams wrote:
> > I think this might turn into something that approaches the same mass
> > of complexity Niall describes,
>
> Nothing ever needs to be quite as complex as what Niall describes ;-)
>
> (no offense intended, Niall)

And here I am thinking I am clear as a bell! :) No offence taken at all
Dave. I often find your thinking confusing too. We just don't think
similarly, but that's likely a good thing.

[BTW, I have a small book shortly going on sale early December outlining
my personal recommendations on how to make human civilisation
sustainable. If you think my coding stuff hurts the head, I am told that
said book is unbelievably complex. Can't see why myself :)]

> > because a Python module can be imported into several places in a
> > hierarchy at once, and it seems we'd have to track which instance of
> > the module is active in order to resolve those scopes correctly.
>
> Meh. I think a module has an official identity, its __name__.

And a version and current state. It's like how a single piece of code
can have multiple identities because multiple threads and processes can
execute it.

> Don't let him scare you off. He's a very smart guy, and a good guy, but
> he tends to describe things in a way that I find to be needlessly
> daunting.

Thank you Dave. I actually didn't know you had an opinion on me and I am
genuinely both surprised and pleased. Your opinion I take seriously. I
hope you keep your high opinion when you see me on ISO SC22 (I hopefully
will be becoming the Irish representative for ISO later this month).

I agree entirely with Dave - don't let me scare you off! What you're
doing Jim is great and keep at it. Do what you feel is best; in the end
it's your code and your time.

Niall

--
Technology & Consulting Services - ned Productions Limited.
http://www.nedproductions.biz/. VAT reg: IE 9708311Q. Company no: 472909.
On 10/05/2011 03:28 PM, Niall Douglas wrote:
> On 5 Oct 2011 at 9:18, Jim Bosch wrote:
>
>> I think this might turn into something that approaches the same mass of
>> complexity Niall describes, because a Python module can be imported into
>> several places in a hierarchy at once, and it seems we'd have to track
>> which instance of the module is active in order to resolve those scopes
>> correctly.
>>
>> I do hope that most people won't mind if I don't implement something as
>> completely general as what Niall has described - there is a lot of
>> complexity there I think most users don't need, and I hope he'd be
>> willing to help with that if he does need to deal with e.g. passing
>> callbacks between multiple interpreters. But I'm also afraid he might
>> be onto something in pointing out that fixing the more standard cases
>> might already be more complicated than it seems.
>
> It's really not that complex when implemented, honestly. It's just
> complex creating that simple design to cover all the possible use
> cases. Once the design is down, you'd be amazed at how little code it
> turns into.
>
> Obviously Jim, you're the one who's implementing it, so you do what
> you like. However, I would suggest that you might consider setting up
> a wiki page on Boost's trac (https://svn.boost.org/trac/boost/ ?)
> describing the proposed design in detail.
>
> I'm also happy to offer a full project management host for your
> efforts on ned Productions' Redmine site
> (http://www.nedproductions.biz/redmine/) if you'd prefer. You'd get
> your own full self-contained project there.

Thanks for the suggestion and the offer. I should probably just go with
getting a boost trac account; there are some aspects of trac I dislike,
but it's also what I know, and I very much doubt my needs will exceed
its abilities in this case. But this is indeed approaching the point
where we need a concrete straw-man to pummel.

> In either case, I'm sure people from the list here would be happy to
> comment and/or contribute to the design document even if they are
> unable to contribute code.

Good to hear!

Jim
On 10/05/2011 11:30 AM, Dave Abrahams wrote:
> on Wed Oct 05 2011, Jim Bosch <talljimbo-AT-gmail.com> wrote:
>
>> On 10/05/2011 07:21 AM, Dave Abrahams wrote:
>>
>>> I don't understand why you guys would want compile-time converters at
>>> all, really. Frankly, I think they should all be eliminated. They
>>> complicate the model Boost.Python needs to support and cause confusion
>>> when the built-in ones mask runtime conversions.
>>
>> I have one (perhaps unusual) use case that's extremely important for
>> me: I have a templated matrix/vector/array class, and I want to define
>> converters between those types and numpy that work with any
>> combination of template parameters. I can do that with compile-time
>> converters, and after including the header everything just works.
>
> Not really. In the end you can only expose particular specializations
> of the templates to Python, and you have to decide, somehow, what those
> are.
>
>> With runtime conversions, I have to explicitly declare all the
>> template parameter combinations I intend to use.
>
> Not really; a little metaprogramming makes it reasonably easy to
> generate all those combinations. You can also use compile-time triggers
> to register runtime converters. I'm happy to demonstrate if you like.

The latter sounds more like what I'd want, though a brief demonstration
would be great. You're right in guessing that I don't really care
whether it's a runtime or compile-time conversion. The key is that I
don't want to have to explicitly declare the conversions, even if I have
some metaprogramming to make that easier - I'd like to only declare
what's actually used, since that's potentially a much smaller number of
declarations.

>>> There are better ways to deal with conversion specialization, IMO. The
>>> runtime registry should be scoped, and it should be possible to find the
>>> "nearest eligible converter" based on the python module hierarchy.
>>
>> I think this might turn into something that approaches the same mass
>> of complexity Niall describes,
>
> Nothing ever needs to be quite as complex as what Niall describes ;-)
>
> (no offense intended, Niall)
>
>> because a Python module can be imported into several places in a
>> hierarchy at once, and it seems we'd have to track which instance of
>> the module is active in order to resolve those scopes correctly.
>
> Meh. I think a module has an official identity, its __name__.
>
>> I do hope that most people won't mind if I don't implement something
>> as completely general as what Niall has described
>
> No problem. As the original author I think you should give what I
> describe a little more weight in this discussion, though ;-)

Doing something that's only a small modification to the current
single-registry model is also very appealing from an
ease-of-implementation standpoint, and it would also be sufficient for
my own needs.

I'd like to see what Stefan's ideas are first, of course, and I should
take a look at some of the code Niall has pointed me at to see if I can
take some steps towards a design that would meet his needs as well. But
at the moment I'm inclined to go with something pretty similar to the
current design to keep this problem from overshadowing and swallowing
all the other things I'd like to go into the upgrade.
on Wed Oct 05 2011, "Niall Douglas" <s_sourceforge-AT-nedprod.com> wrote:

> On 5 Oct 2011 at 7:21, Dave Abrahams wrote:
>
> What I was proposing was that the compile-time registry is identical
> to the runtime registry.

I don't even know what that means.

> Hence the order of lookup so a lot of the simpler conversion can be
> done inline by the compiler.

But AFAICT there's really almost no advantage in that, and it adds
special cases to the model.

> Sure, the same system can be abused to have special per-compiland
> behaviours.

That's fine; a scoped registry would allow the same thing. When you
have a bunch of independently-developed modules flying around it's more
than likely that there will be "ODR violations" across different
modules, and that should be OK as long as they don't try to exchange
those types.

> I personally have found that rather useful for working around very
> special situations such as compiler bugs. I agree that you shouldn't
> have two separate systems, and 99% of the time both registries need to
> do the same thing. In my own code in fact I have a lot of unit tests
> ensuring that the compile-time and run-time registries behave
> identically.
>
>> > Imagine the following. Program A loads DLL B and DLL C. DLL B is
>> > dependent on DLL D which uses BPL. DLL C is dependent on DLL E which
>> > uses BPL.
>>
>> Jeez, I'm going to have to graph this
>>
>>       A
>>      / \
>>     B   C
>>     |   |
>>     D   E
>>      \ /
>>      BPL
>
> You can't guarantee that Dave. It depends on what flags to dlopen the
> end user uses.

And on the OS, and on what order things are loaded in. I wasn't trying
to make an assertion, just trying to picture what you were describing.

> And right now, Python itself defaults to multiple BPLs.

I wouldn't put it that way, not at all. Again, what happens depends on
the platform and a lot of other factors.

>> > Right now with present BPL, we have to load two copies of BPL, one
>> > for DLL D and one for DLL E. They maintain separate type registries,
>> > so all is good.
>>
>> That's not correct. Boost.Python was designed to deal with scenarios
>> like this and be run as a single instance in such a system, with a
>> single registry.
>
> http://muttley.hates-software.com/2006/01/25/c37456e6.html
>
> There are plenty more all over the net.

Believe me, I'm fully aware of those problems, and note that your
reference doesn't mention Boost at all. I happen to know that this very
large project out of Lawrence Berkeley Labs has been successfully using
Boost.Python in multi-module setups with a single instance of the
library, across many different platforms, for years:

http://cctbx.sourceforge.net/

You should also take a look at this whole thread

http://gcc.gnu.org/ml/gcc/2002-05/msg02945.html

if you want to have a clear sense of some of the issues.

>> > But what if DLL B returns a python function to Program A, which then
>> > installs it as a callback with DLL C?
>>
>> OMG, could you make this more convoluted, please?
>
> No, it's a valid use case. Again, search google and you'll see. Lots
> of people with this same problem.

I'm sure it's a valid use case, and I'm also sure you can illustrate
whatever problem you're describing with no more than two Boost.Python
modules.

>> > As I mentioned earlier, this is a very semantically similar problem
>> > to supporting multiple python interpreters anyway with each calling
>> > into one another.
>>
>> How exactly is one python interpreter supposed to "call into" another
>> one? Are you suggesting they have their own threads and one blocks to
>> wait for the other, or is it something completely different?
>
> Right now BPL doesn't touch the GIL or current interpreter context.
>
> I'm saying it ought to manage both, because getting it right isn't
> obvious.

Sure.

> And once again, if program A causes the loading of two DLLs each of
> which runs its own python interpreter, you can get all sorts of unfun
> when the two interpreters call into one another.

Again, what does it mean for one interpreter to "call into another"?

--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com
on Wed Oct 05 2011, Jim Bosch <talljimbo-AT-gmail.com> wrote:

> On 10/05/2011 11:30 AM, Dave Abrahams wrote:
>
>> on Wed Oct 05 2011, Jim Bosch <talljimbo-AT-gmail.com> wrote:
>>
>>> With runtime conversions, I have to explicitly declare all the
>>> template parameter combinations I intend to use.
>>
>> Not really; a little metaprogramming makes it reasonably easy to
>> generate all those combinations. You can also use compile-time triggers
>> to register runtime converters. I'm happy to demonstrate if you like.
>
> The latter sounds more like what I'd want, though a brief
> demonstration would be great. You're right in guessing that I don't
> really care whether it's a runtime or compile-time conversion. The
> key is that I don't want to have to explicitly declare the
> conversions, even if I have some metaprogramming to make that easier -
> I'd like to only declare what's actually used, since that's
> potentially a much smaller number of declarations.

I'm not sure exactly what you have in mind when you say "declare what's
actually used," because you haven't said what counts as "usage." That
said, I can give you an abstract description. It's just a matter of
being able to "hitch a ride" at its point-of-use: arrange for some
customization point to be instantiated during "use" that you can
customize elsewhere. That customization then is where you register the
type.

For example, let's imagine that by saying a type T is used you mean it's
a parameter or return value of a wrapped function. Then you might
designate this class template to be instantiated and its constructor
called:

    namespace boost { namespace python { namespace user_hooks {

        // users are encouraged to specialize templates in this namespace
        template <class T>
        struct is_used
        {
            is_used() { /* do nothing by default */ }
        };

    }}}

In a user's wrapping code, she could make a partial specialization of
this class template:

    namespace boost { namespace python { namespace user_hooks {

        template <class U1, class U2, class U3>
        struct is_used<my_template<U1,U2,U3> >
        {
            is_used() { my_register_converter<U1,U2,U3>(); }
        };

    }}}

The other way to make customization points like this uses argument
dependent lookup and is usually less verbose, though it brings with it
other sticky problems that you probably want to avoid.

> Doing something that's only a small modification to the current
> single-registry model is also very appealing from an
> ease-of-implementation standpoint, and it would also be sufficient
> for my own needs.
>
> I'd like to see what Stefan's ideas are first, of course, and I should
> take a look at some of the code Niall has pointed me at to see if I
> can take some steps towards a design that would meet his needs as
> well. But at the moment I'm inclined to go with something pretty
> similar to the current design to keep this problem from overshadowing
> and swallowing all the other things I'd like to go into the upgrade.

Good idea.

--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com