Duda Jarek · Posted July 28, 2008

Standard data correction methods have some maximal number of errors they can correct. For example, Hamming(7,4) uses 3 checksum bits per 4 bits of information and works fine as long as there is at most 1 error per 7-bit block. We pay quite a lot of redundancy, but are we safe now? The problem is the pessimistic cases: even if the expected error rate is 1/100 per bit, some 7-bit blocks will still quite often get 2 errors. For p = 0.01, the probability of 2 or more errors in a block is 1 - 0.99^7 - 7 * 0.01 * 0.99^6 ≈ 0.002, i.e. about one block in 500.

I would like to propose a statistical approach to data correction which protects against such pessimistic cases. Thanks to this we could, for example, reduce the redundancy while keeping similar safety.

The trick is to use a very precise coding, such that any error makes the decoded sequence from that point on look like a completely random 0/1 sequence (p(0) = p(1) = 1/2). One option is a block cipher which uses the previous block to calculate the following one, but there is a much better coding for this purpose that I will mention later.

Now add some easily recognizable redundancy to the information, for example insert a '1' between each bit. If while decoding a '0' shows up in one of these places, we know some error occurred earlier. Knowing the statistical characteristics of the expected errors, we can build a list of the most probable error patterns for such a case, ordered by probability: at the top of the list there should be 'previous bit flipped', ..., and after a while patterns like 'two bits flipped: ...' appear. This list can be very large.

Now, when we know that an error appeared somewhere nearby, we go through this list position by position: correct the data as if that pattern really happened (flip the corresponding bits) and try to decode a further fixed number of bits (a few dozen). If everything is fine, i.e. we get only '1' on the inserted positions, we can assume it was that error; if not, we try the next pattern on the list. The list can be generated online, so given a large amount of time we could repair even a badly damaged transmission. While building the list, we have to remember that errors can also appear in the succeeding bits. (A sketch of this detection-and-correction loop is given at the end of the post.)

Using block ciphers for this is a bit nasty: they are slow, and the blocks are large, so an error is localized only coarsely. There is a new coding that is just ideal for the above purpose: Asymmetric Numeral Systems (ANS), a new entropy coder with very nice properties for cryptography and for data correction. It is much faster than block ciphers and uses small blocks of varying length. Here is a demonstration of it:
http://demonstrations.wolfram.com/DataCompressionUsingAsymmetricNumeralSystems/

What do you think about it?
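Below is a minimal, self-contained sketch (my own illustration, not code from the post) of the encoding side of this idea in Python. A simple multiplicative scrambler stands in for the block cipher / ANS coder, so that any transmission error scrambles every later decoded bit; the interleaved '1' bits are the recognizable redundancy. All names here (encode, decode, _step, _key) are made up for this sketch.

```python
# Toy model of the scheme: scramble so errors propagate forever,
# and interleave '1' check bits as easily recognizable redundancy.

MASK32 = (1 << 32) - 1

def _step(state, cbit):
    # Mix a transmitted bit into the state; a flipped bit changes the
    # state (and hence the keystream) from this point onward.
    return (((state << 1) | cbit) * 0x9E3779B1) & MASK32

def _key(state):
    # One keystream bit derived from the current state.
    return (state >> 16) & 1

def encode(data_bits):
    # Insert a check '1' after every data bit, then scramble the result.
    plain = []
    for b in data_bits:
        plain += [b, 1]
    state, coded = 1, []
    for p in plain:
        c = p ^ _key(state)
        coded.append(c)
        state = _step(state, c)
    return coded

def decode(coded):
    # Unscramble; return (data bits, index of first violated check or None).
    state, plain = 1, []
    for c in coded:
        plain.append(c ^ _key(state))
        state = _step(state, c)
    for i in range(1, len(plain), 2):
        if plain[i] != 1:
            return plain[0::2], i
    return plain[0::2], None
```

Because the decoder's state is driven by the received bits, a single flipped bit makes all later keystream bits diverge, so roughly half of the following check positions read '0' and the error is detected within a few checks of where it happened.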
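And here is a sketch of the correction loop itself, reusing encode/decode from above and the same made-up names. For a memoryless channel the candidates are ordered by decreasing probability: all single flips in a window before the first violated check, then all double flips; a candidate is accepted once the next few dozen checks pass.

```python
import random
from itertools import combinations

def transmit(coded, p=0.01, rng=random):
    # Binary symmetric channel: flip each bit independently with probability p.
    return [b ^ (rng.random() < p) for b in coded]

def correct(received, verify=64):
    # Repair errors one region at a time; return data bits, or None on failure.
    data, v = decode(received)
    while v is not None:
        window = range(v, max(v - 2 * verify, -1), -1)  # most recent first
        for flips in list(combinations(window, 1)) + list(combinations(window, 2)):
            trial = received[:]
            for j in flips:
                trial[j] ^= 1
            _, v2 = decode(trial)
            if v2 is None or v2 > v + verify:  # next few dozen checks pass
                received = trial
                break
        else:
            return None  # error pattern not on our candidate list
        data, v = decode(received)
    return data

# Usage: round-trip 200 random bits through a 1% noisy channel.
rng = random.Random(1)
data = [rng.randint(0, 1) for _ in range(200)]
print(correct(transmit(encode(data), p=0.01, rng=rng)) == data)
```

As the post notes, errors can also hit the succeeding bits used for verification: this toy simply gives up (returns None) when a second error lands inside the verify window, whereas a fuller implementation would extend the candidate list to cover those cases too.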