Bit
The bit is the most basic unit of information in computing and digital communication. The name is a portmanteau of binary digit. The bit represents a logical state with one of two possible values. These values are most commonly represented as either "1" or "0", but other representations such as ''true''/''false'', ''yes''/''no'', ''on''/''off'', or ''+''/''−'' are also widely used.
A bit may be physically implemented with a two-state device. The relation between these values and the physical states of the underlying storage or device is a matter of convention, and different assignments may be used even within the same device or program.
A contiguous group of binary digits is commonly called a ''bit string'', a ''bit vector'', or a single-dimensional (or multi-dimensional) ''bit array''. A group of eight bits is called one ''byte'', although historically the size of the byte was not strictly defined, and a string of four bits is usually called a ''nibble''. Half, full, double, and quadruple words frequently consist of a number of bytes that is a low power of two.
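As a minimal illustrative sketch (not part of the original text), the following Python snippet shows how a byte splits into its two nibbles using ordinary bitwise operations; the variable names are illustrative only.

```python
# Illustrative sketch: extracting the two nibbles (4-bit halves) of a byte.
byte = 0b10110110                  # one byte: eight bits

high_nibble = (byte >> 4) & 0xF    # upper four bits -> 0b1011
low_nibble = byte & 0xF            # lower four bits -> 0b0110

print(f"{byte:08b} -> high {high_nibble:04b}, low {low_nibble:04b}")
```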
In information theory, one bit is the information entropy of a random binary variable that is 0 or 1 with equal probability, or the information that is gained when the value of such a variable becomes known. As a unit of information or negentropy, the bit is also known as a ''shannon'', named after Claude E. Shannon. As a measure of the length of a digital string that is encoded as symbols over a 0-1 (binary) alphabet, the bit has been called a binit, but this usage is now rare.
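To make the information-theoretic definition concrete, the entropy of a binary variable that is 1 with probability p is H(p) = −p·log₂(p) − (1−p)·log₂(1−p) bits, which reaches its maximum of exactly one bit (one shannon) when p = 1/2. The following sketch, assuming Python's standard math module, evaluates this formula.

```python
import math

def binary_entropy(p: float) -> float:
    """Shannon entropy, in bits, of a binary variable that is 1 with probability p."""
    if p in (0.0, 1.0):
        return 0.0                      # a certain outcome carries no information
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(binary_entropy(0.5))   # 1.0 bit: the maximum, reached at equal probability
print(binary_entropy(0.9))   # ~0.469 bits: a biased variable conveys less than one bit
```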
In data compression, the goal is to find a shorter representation for a string so that it requires fewer bits to store or transmit; the string must be compressed before storage or transmission and decompressed afterwards. The field of algorithmic information theory is devoted to the study of the "irreducible information content" of a string (i.e. its shortest possible representation length, in bits), under the assumption that the receiver has minimal ''a priori'' knowledge of the method used to compress the string. In error detection and correction, the goal is to add redundant data to a string to enable the detection and/or correction of errors during storage or transmission; the redundant data has to be computed before storage or transmission, and checked or corrected afterwards.
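One of the simplest forms of such redundant data is a single parity bit. The sketch below (an illustration added here, not taken from the original article) appends an even-parity bit to a bit string and shows how flipping any one bit in transit is detected, though not corrected or located.

```python
def add_parity(bits: list[int]) -> list[int]:
    """Append an even-parity bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def check_parity(bits: list[int]) -> bool:
    """Return True if the received string still has even parity (no single-bit error detected)."""
    return sum(bits) % 2 == 0

sent = add_parity([1, 0, 1, 1])   # -> [1, 0, 1, 1, 1]
corrupted = sent.copy()
corrupted[2] ^= 1                 # flip one bit "in transit"
print(check_parity(sent))         # True: parity holds, no error detected
print(check_parity(corrupted))    # False: the single-bit error is detected
```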
The symbol for the binary digit is either "bit", per the IEC 80000-13:2008 standard, or the lowercase character "b", per the IEEE 1541-2002 standard. Use of the latter may create confusion with the capital "B", which is the international standard symbol for the byte.