I'm trying to use Boost.Iostreams in a way that is probably not intended.
My use case is the following: I have a C++ input stream. I want to read its
first bytes to determine whether it is gzip, bzip2, or neither, put the
characters back into the stream, and then create the appropriate pipeline to
read it. Ideally, I'd then restart the procedure, so that I can
incrementally build a pipeline based on the successive magic numbers
read at the beginning of the stream after each level of decoding.
Imagine a file that has been gzipped and uuencoded. Let's call its
input stream ifs.
Reading the first bytes of ifs, I would detect that it's a uuencoded
file, so I construct a pipeline ifs1 = ifs | uudecode.
Reading the first bytes of ifs1, I would detect that it's a gzipped
file, and would construct a new pipeline ifs2 = ifs | uudecode | gzip
(the reverse ordering of filters for reading pipelines does not help
here, but I can manage that).
And so on....
Unfortunately, this does not work as I expect.
The attached code is a small example showing the problem. When reading
the stream after having pushed the gzip filter (line 23), I get an
exception.
Digging a little with gdb seems to show that the putback buffer is
somehow lost, so the gzip filter does not read the proper header,
hence the exception. The raw in.read does not throw, but
returns random bytes after the first 0x1f.
The second read correctly reads the character put back (0x1f), but the
second putback triggers an exception ("putback buffer full"). If the first
read/putback is done directly on the file, everything works as expected.
I'm really stuck at this point, so any help would be welcome.
I hope there is a way to use Boost.Iostreams to achieve what I want to do.
Thanks in advance for any clue on how to solve this.