A bit can contain the value 0 or 1.
A group of 8 bits is a byte.
All other data types have sizes that are multiples of a byte: 16 bits, 32 bits, 64 bits, etc.
From the computer's point of view they are always binary.
If a typical computer application must print a number to the user, it has to convert it from binary to decimal (for non-programmers). The decimal form exists only on the screen; internally it is still binary. The conversion takes CPU time and memory.
In older times, BCD (binary-coded decimal) was sometimes used. Today it is mostly obsolete.
https://en.m.wikipedia.org/wiki/Binary-coded_decimal
It can still be found in e.g. embedded systems. Display and basic arithmetic are easier and take less time than a full binary-to-decimal conversion, because that conversion needs multiplication, division and/or modulo, which may not be implemented in cheaper, weaker chips (e.g. the MOS Technology 6502/6510 CPU used in the Atari and C-64 has no built-in multiply, divide or modulo instructions).
A CPU has flags: zero, overflow, carry, negative and others.
Some CPU instructions modify some of these flags, usually the arithmetic instructions.
If you subtract e.g. variable x0 from x1 (a register from a register or memory) and the result is zero, the zero flag is set in the CPU. What does it mean? That x0 was equal to x1. That is why the conditional jump instructions are called je, jne, beq and bne: Jump if Equal, Jump if Not Equal, Branch if EQual and Branch if Not Equal. They test the state of the zero flag in the CPU.
This is what higher-level languages use to implement if()/for()/while() etc., and to check a Boolean's state and jump (or not) accordingly.
What you see on the screen is just a bunch of pixels in the shape of a digit.