Labels: bug (Something isn't working)
Description
The specification for encoding "binary values" says:

> If the input data is not evenly divisible into 40-bit segments, the encoder must pad the beginning of the input data with enough zeroes/nulls to fit the 40-bit segment boundary; for example:
```
Binary:  DE AD BE EF CA FE BA BE
Padded:  00 00 DE AD BE | EF CA FE BA BE
Decimal: 14593470 | 1029902875326
Base32H: 000D-XBDY | XZ5F-XEMY
```
So the correct output for encoding the three bytes `DE AD BE` is `000DXBDY`, and the three padding zeros must be present.
However, encoding the decimal number 14593470 should result in DXBDY.
Why is the logic here different? Why not just remove the zero padding from the binary encoding?
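For reference, a direct numeric encoder never produces that padding. Here is a minimal sketch, assuming the Base32H digit alphabet (0-9 plus the 22 letters that remain after dropping I, O, S, and U); the `ALPHABET` constant and function name are illustrative, not from the spec text quoted above:

```python
# Assumed Base32H digit set: 0-9, then A-Z without I, O, S, U.
ALPHABET = "0123456789ABCDEFGHJKLMNPQRTVWXYZ"

def encode_numeric(v: int) -> str:
    """Encode a non-negative integer as Base32H, with no zero padding."""
    if v == 0:
        return "0"
    digits = []
    while v:
        v, r = divmod(v, 32)
        digits.append(ALPHABET[r])
    return "".join(reversed(digits))

print(encode_numeric(14593470))  # DXBDY
```

This reproduces the `DXBDY` output expected for the decimal value, in contrast to the padded `000DXBDY` required for the equivalent bytes.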
It seems very natural to implement a numeric encoder given a binary encoder as follows:
```python
# Given
def encode_binary(v: bytes) -> str:
    ...

def encode_numeric(v: int) -> str:
    bs = to_big_endian(v)
    return encode_binary(bs)
```

but this will give the wrong (zero-padded) output.
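The mismatch is easy to reproduce. Below is a hedged sketch of both encoders, assuming the Base32H digit alphabet (0-9 plus the letters A-Z minus I, O, S, U); `encode_binary` follows the padding rule quoted above, and `encode_numeric` is the "natural" implementation on top of it:

```python
# Assumed Base32H digit set: 0-9, then A-Z without I, O, S, U.
ALPHABET = "0123456789ABCDEFGHJKLMNPQRTVWXYZ"

def encode_binary(data: bytes) -> str:
    # Pad the front with null bytes up to a 40-bit (5-byte) boundary, per spec.
    pad = (-len(data)) % 5
    data = b"\x00" * pad + data
    out = []
    for i in range(0, len(data), 5):
        seg = int.from_bytes(data[i:i + 5], "big")
        # Each 40-bit segment yields exactly eight 5-bit Base32H digits.
        out.append("".join(ALPHABET[(seg >> s) & 31] for s in range(35, -1, -5)))
    return "".join(out)

def encode_numeric(v: int) -> str:
    # Naive reuse of the binary encoder: the zero padding leaks through.
    bs = v.to_bytes((v.bit_length() + 7) // 8 or 1, "big")
    return encode_binary(bs)

print(encode_binary(b"\xDE\xAD\xBE"))  # 000DXBDY, as the spec requires
print(encode_numeric(14593470))        # also 000DXBDY, not the expected DXBDY
```

One possible workaround is to strip the padding afterwards, e.g. `encode_binary(bs).lstrip("0") or "0"`, but that seems to defeat the point of sharing the implementation, hence the question.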