[boost] [testing] QCC causing huge numbers of failures

[boost] [testing] QCC causing huge numbers of failures

David Abrahams

It looks like QCC got added to the regression system and now I am
getting huge regression reports with the Python library failing every
test.  

I'm going to ramble here, because I don't really know what or who to
lash out at ;).  So apologies in advance if it seems like I'm firing
indiscriminately.  I hope we can ultimately make things better.

With all due gratitude to Doug for setting it up, I have a hard time
not concluding that there's something wrong with the regression nanny
system.  The psychological impact seems misdirected.  I think the goal
is that a sudden crop of failures showing up in my mailbox should be
seen as a problem I need to address.  Too often, though, there's
nothing I can do about such failures and they get ignored.  In this
case, it's just really annoying.  These aren't regressions because
Boost.Python never worked with QNX in the past.  Why am I getting
these reports?

Shouldn't whoever enabled these tests have done something to ensure
that they wouldn't cause this to happen to "innocent" developers?


--
Dave Abrahams
Boost Consulting
www.boost-consulting.com

Re: [boost] [testing] QCC causing huge numbers of failures

Jim Douglas
David Abrahams wrote:
> It looks like QCC got added to the regression system and now I am
> getting huge regression reports with the Python library failing every
> test.  
>
> I'm going to ramble here, because I don't really know what or who to
> lash out at ;).  

Guilty as charged...

> So apologies in advance if it seems like I'm firing
> indiscriminately.  I hope we can ultimately make things better.

The buckshot approach often works :-)

> With all due gratitude to Doug for setting it up, I have a hard time
> not concluding that there's something wrong with the regression nanny
> system.  The psychological impact seems misdirected.  I think the goal
> is that a sudden crop of failures showing up in my mailbox should be
> seen as a problem I need to address.  

They also show up in the newsgroup, which I find useful. In any case, as
I am the QNX platform maintainer, an e-mail to me would soon elicit an
explanation.

> Too often, though, there's
> nothing I can do about such failures and they get ignored.  In this
> case, it's just really annoying.  These aren't regressions because
> Boost.Python never worked with QNX in the past.  Why am I getting
> these reports?

I added QNX6 to the "required" list a few days ago and I am now slowly
working through the test failures. We have a solution for Boost.Python,
but just haven't implemented it yet. I was hoping for time to achieve
something optimal, but the rush towards 1.34 means that it will be more
of a kludge. I promise it will get done early next week.
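
For the archives, the interim fix is the usual one: an entry in
status/explicit-failures-markup.xml marking Boost.Python unusable under
the qcc toolset, so the failures are reported as expected rather than as
regressions in your inbox. A rough sketch of the kind of entry I mean
(element names from memory; please check against the current markup
schema before committing):

    <!-- status/explicit-failures-markup.xml -- sketch only, verify
         element names against the current schema -->
    <library name="python">
        <mark-unusable>
            <toolset name="qcc*"/>
            <note author="Jim Douglas">
                Boost.Python has not yet been ported to QNX/qcc;
                these failures are expected, not regressions.
            </note>
        </mark-unusable>
    </library>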

If you check back, I think you should find a gradual improvement for qcc.

> Shouldn't whoever enabled these tests have done something to ensure
> that they wouldn't cause this to happen to "innocent" developers?

Blame the new kid on the block then :-)

Jim
