26 September 2016

Bits and Bytes


Bits and bytes are computer terms that have come into fairly common usage.

The byte is a unit of digital information that most commonly consists of eight bits. Historically, the byte was the number of bits used to encode a single character of text in a computer. For this reason, it is the smallest addressable unit of memory in many computer architectures.
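As a quick illustration, here is a small Python sketch (the strings and variable names are just examples) showing that eight bits give 256 possible values and that, in a simple encoding like ASCII, each character of text occupies one byte:

    # Eight bits can hold 2**8 = 256 distinct values (0 through 255).
    print(2 ** 8)            # 256

    # A bytes object is a sequence of 8-bit values; in ASCII each
    # character of text takes exactly one of them.
    data = "hi".encode("ascii")
    print(len(data))         # 2 -- two characters, two bytes
    print(data[0], data[1])  # 104 105 -- the integer stored in each byte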

The bit is a basic unit of information. It can take only one of two values, most commonly written as 0 or 1.

The term bit is a portmanteau of binary digit.
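To see those binary digits directly, here is another small Python sketch (the character 'A' is just an example) that prints the eight bits making up one byte:

    # The character 'A' is stored as the number 65, which fits in one byte.
    value = ord("A")
    print(value)                 # 65
    # Written out in binary, that byte is eight binary digits -- eight bits.
    print(format(value, "08b"))  # 01000001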

The story of the bit began in 1948 at Bell Labs in New Jersey, the same year the lab announced its invention of the transistor. A Bell Labs mathematician named Claude Shannon published “A Mathematical Theory of Communication,” in which he proposed “bit” as a new fundamental “unit for measuring information.”

At that time, it must have seemed strange to consider “information” to be measurable and quantifiable.

Shannon was entering a field that didn't exist and that he would christen “information theory.”

The term byte was coined by Werner Buchholz in July 1956, during the early design phase for the IBM Stretch computer. It is a deliberate respelling of bite, chosen so the word would not be accidentally mutated into bit.

Early computers used a variety of four-bit and six-bit binary-coded decimal representations for printable characters. During the early 1960s, while also active in ASCII standardization, IBM introduced the eight-bit Extended Binary Coded Decimal Interchange Code (EBCDIC) in its System/360 product line, an expansion of the six-bit code used in its earlier punched-card equipment. The System/360 became prominent, and its success led to the near-universal adoption of the eight-bit byte.
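For the curious, the gap between IBM's code and ASCII is easy to see from Python, whose standard library still ships EBCDIC codecs (cp037 is one common variant; the exact code page varied by machine and country):

    # Compare how ASCII and one EBCDIC code page (cp037) encode the same characters.
    for ch in "A", "1", " ":
        ascii_byte = ch.encode("ascii")[0]
        ebcdic_byte = ch.encode("cp037")[0]
        print(f"{ch!r}: ASCII 0x{ascii_byte:02X}, EBCDIC 0x{ebcdic_byte:02X}")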
