2010-05-12
Data stream decoding now works completely for all encoding types - existing or future.
Extending the supported encoding types is a straightforward process from now on, with only the encoding-specific streaming layer affected - i.e. adding GCR would only require a GCR encoding layer to be added; the rest of the library code should remain completely unaffected. This is very important, as testing the whole library for all edge cases is extremely time-consuming.
Blocks can be decoded (from IPF) and encoded (in any encoding format, such as MFM, GCR, etc.) as is; gaps using gap streams could use similar code.
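As a rough illustration of what an encoding-specific streaming layer could look like, here is a minimal sketch of a pluggable encoder interface with an MFM implementation. The names (EncodingLayer, MfmLayer) are purely illustrative assumptions, not the library's actual API; only the MFM rule itself (a clock cell is 1 when both neighbouring data bits are 0) is standard.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical sketch: one abstract layer per encoding type, so adding
// GCR would mean adding a GcrLayer without touching the rest of the code.
struct EncodingLayer {
    virtual ~EncodingLayer() {}
    // Encode raw data bytes into a cell (clock + data bit) stream.
    virtual std::vector<uint8_t> Encode(const std::vector<uint8_t>& data) const = 0;
};

struct MfmLayer : EncodingLayer {
    std::vector<uint8_t> Encode(const std::vector<uint8_t>& data) const override {
        std::vector<uint8_t> cells;
        int prev = 0;
        for (uint8_t byte : data)
            for (int i = 7; i >= 0; --i) {
                int bit = (byte >> i) & 1;
                // MFM rule: clock cell is 1 only if the previous and
                // current data bits are both 0.
                cells.push_back(static_cast<uint8_t>(!prev && !bit));
                cells.push_back(static_cast<uint8_t>(bit));
                prev = bit;
            }
        return cells;
    }
};
```

A GCR layer would implement the same interface with a group-code lookup table instead of the clock-bit rule, which is what keeps the rest of the library unaffected.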
All decoding/encoding algorithms use new data structures, to avoid any dependency on legacy code (and the bugs that come with it).
The old data structures will be converted to the new ones when track decoding is requested, but a new interface could also be provided to allow control over all the advanced decoding features; by default those will either be disabled or set to simulate the old library, since old images cannot possibly rely on the new behaviour.
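The default-to-legacy idea could be sketched as a small options structure whose defaults reproduce the old library's behaviour; everything here (DecodeOptions, the individual flag names) is a hypothetical illustration, not the library's real interface.

```cpp
// Hypothetical sketch: advanced decoding features are opt-in, so an old
// image decoded without an explicit configuration behaves exactly like
// it did under the legacy library.
struct DecodeOptions {
    bool weak_bit_simulation;   // illustrative advanced feature
    bool strict_gap_checking;   // illustrative advanced feature

    // Defaults matching the old library: all new features disabled.
    static DecodeOptions Legacy() {
        DecodeOptions opt;
        opt.weak_bit_simulation = false;
        opt.strict_gap_checking = false;
        return opt;
    }
};
```

A new interface would then simply accept a caller-supplied DecodeOptions, while the existing entry points pass DecodeOptions::Legacy() internally.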
Now that the core functionality is complete and tested, the existing library features will have to be ported to the new core.
Still to do:
Since the library core now supports a significantly enhanced *generic and encoding-agnostic* model and the code is very clean, we might consider replacing some IPF images with ones using the new functionality, so that unmaintainable legacy code can be discarded. If this happens, images that would require supporting very obscure legacy functionality will be replaced.