I read in this assembly programming tutorial that 8 bits are used for data while 1 bit is for parity, which is then used for detecting a parity error (caused by a hardware fault or electrical disturbance).
Is this true?
A byte of data is eight bits. There may be more bits per byte of data used at the OS or even the hardware level for error checking (a parity bit, or an even more advanced error-detection scheme), but the data is eight bits, and any parity bit is usually invisible to the software. A byte has been standardized to mean 'eight bits of data'. The text isn't wrong in saying that more bits may be dedicated to storing a byte of data than the eight bits of data itself, but those extra bits aren't typically considered part of the byte per se; the text itself points to this in the following section of the tutorial:
4*8=32, it might actually take up 36 bits on the system but for your intents and purposes it's only 32 bits.
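To see what this means in practice, here is a minimal C sketch (assuming a typical platform with 8-bit bytes): a 4-byte value gives the program exactly 32 bits to work with, and any parity or ECC bits the hardware may keep alongside it never show up in the value.

```c
#include <inttypes.h>
#include <limits.h>
#include <stdio.h>

int main(void)
{
    uint32_t value = 0;                       /* four 8-bit bytes of data */

    /* Everything the program can see: 4 bytes * 8 bits = 32 bits.
     * If the hardware spends extra bits on parity or ECC, they are
     * managed below this level and never appear here.               */
    printf("bytes: %zu, bits: %zu\n", sizeof value, sizeof value * CHAR_BIT);

    value = ~value;                           /* set every data bit */
    printf("all bits set: 0x%08" PRIX32 "\n", value);  /* 0xFFFFFFFF, nothing more */
    return 0;
}
```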
Traditionally, a byte can be any size and is just the smallest addressable unit of memory. These days, 8-bit bytes have pretty much been standardized for software. As JustAnotherSoul said, the hardware may store more bits than the 8 bits of data. If you're working on programmable logic devices, like FPGAs, you might see that their internal memory is often addressable as 9-bit chunks, and as the HDL author, you could use that 9th bit for error checking or just to store larger amounts of data per "byte". When buying memory chips for custom hardware, you generally have the choice of 8- or 9-bit addressable units (or 16/18, 32/36, etc.), and then it is up to you whether you have 9-bit "bytes" and what you do with that 9th bit if you choose to have it.
That text is extremely poorly worded. He is almost certainly talking about ECC (error-correcting code) RAM. ECC RAM commonly stores 8 bits' worth of information using 9 bits; the extra bit is used to store error-correction codes.
This is all completely invisible to users of the hardware. In both cases, software using this RAM sees 8 bits per byte. As an aside: error-correcting codes in RAM typically aren't actually 1 bit per byte; they're instead 8 bits per 8 bytes. This has the same space overhead, but has some additional advantages. See SECDED for more info.
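To make the "8 check bits per 8 data bytes" idea concrete, here is a minimal C sketch of an extended Hamming (72,64) code, the usual basis for SECDED. The function names and bit layout are purely illustrative assumptions, not how any particular memory controller arranges things; real controllers do all of this in hardware, invisibly to software.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative extended Hamming (72,64) SECDED code.
 * code[1..71] is a standard Hamming codeword (check bits sit at the
 * power-of-two positions), code[72] is an overall parity bit, and
 * code[0] is unused so indices match positions. */

static void secded_encode(uint64_t data, uint8_t code[73])
{
    int d = 0;
    for (int pos = 1; pos <= 71; pos++) {
        if ((pos & (pos - 1)) == 0)
            code[pos] = 0;                     /* check-bit slot, filled below */
        else
            code[pos] = (data >> d++) & 1;     /* scatter the 64 data bits     */
    }
    /* Each check bit covers every position whose index has that bit set. */
    for (int p = 1; p <= 64; p <<= 1) {
        uint8_t parity = 0;
        for (int pos = 1; pos <= 71; pos++)
            if ((pos & p) && pos != p)
                parity ^= code[pos];
        code[p] = parity;
    }
    /* Overall parity over the whole codeword enables double-error detection. */
    uint8_t overall = 0;
    for (int pos = 1; pos <= 71; pos++)
        overall ^= code[pos];
    code[72] = overall;
}

/* Returns 0 if clean, the position of a correctable single-bit error,
 * or -1 if an uncorrectable double-bit error was detected. */
static int secded_check(const uint8_t code[73])
{
    int syndrome = 0;
    for (int p = 1; p <= 64; p <<= 1) {
        uint8_t parity = 0;
        for (int pos = 1; pos <= 71; pos++)
            if (pos & p)
                parity ^= code[pos];
        if (parity)
            syndrome |= p;
    }
    uint8_t overall = 0;
    for (int pos = 1; pos <= 72; pos++)
        overall ^= code[pos];

    if (syndrome == 0 && overall == 0) return 0;        /* no error   */
    if (overall == 1) return syndrome ? syndrome : 72;  /* single bit */
    return -1;                                          /* double bit */
}

int main(void)
{
    uint8_t code[73];
    secded_encode(0x0123456789ABCDEFULL, code);

    code[37] ^= 1;                                  /* one flipped bit */
    printf("single flip -> position %d\n", secded_check(code));

    code[5] ^= 1;                                   /* a second flip   */
    printf("double flip -> %d (detected, not correctable)\n", secded_check(code));
    return 0;
}
```

The extra overall-parity bit is what turns plain single-error correction into SECDED: any single flipped bit can be located and corrected, while any two flipped bits are still detected, just not correctable, which is the advantage over a lone parity bit per byte.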
Generally speaking, the short answer is that a byte is 8 bits. This oversimplifies the matter (sometimes even to the point of inaccuracy), but it is the definition most people (including a large number of programmers) are familiar with, and the definition nearly everyone defaults to (regardless of how many differently-sized bytes they've had to work with). More specifically, a byte is the smallest addressable memory unit for the given architecture, and is generally large enough to hold a single text character. On most modern architectures, a byte is defined as 8 bits; ISO/IEC 80000-13 also specifies that a byte is 8 bits, as does popular consensus (meaning that if you're talking about, say, 9-bit bytes, you're going to run into a lot of trouble unless you explicitly state that you don't mean normal bytes). However, there are exceptions to this rule, most notably older architectures built around 36-bit words and 9-bit bytes.
So, in most cases a byte will be 8 bits. If not, it's probably 9 bits, and may or may not be part of a 36-bit word.
A byte is usually defined as the smallest individually addressable unit of memory space. It can be any size. There have been architectures with byte sizes anywhere between 6 and 9 bits, maybe even bigger.

There are also architectures where the only addressable unit is the size of the bus; on such architectures we can either say that they simply have no byte, or that the byte is the same size as the word (in one particular case I know of that would be 32 bits). Either way, it is definitely not 8 bits. Likewise, there are bit-addressable architectures; on those, we could again argue that bytes simply don't exist, or that bytes are 1 bit. Either way is a sensible definition, but 8 bits is definitely wrong.

On many mainstream general-purpose architectures, one byte contains 8 bits. However, that is not guaranteed. The further away you stray from the mainstream and/or from general-purpose CPUs, the more likely you are to encounter non-8-bit bytes. This goes so far that some highly portable software even makes the size configurable; e.g. older versions of GCC contained a macro for exactly that purpose.

If you really want to stress that you are talking about an exact amount of 8 bits rather than the smallest addressable amount of memory, however large that may be, you can use the term octet, which is used, for example, in many newer RFCs.
Note that the term byte is not well-defined without context. As far as computer architectures are concerned, you can assume that a byte is 8-bit, at least for modern architectures. This was largely standardised by programming languages such as C, which required bytes to have at least 8 bits but didn't provide any guarantees for larger bytes, making 8 bits per byte the only safe assumption.

There are computers with addressable units larger than 8 bits (usually 16 or 32), but those units are usually called machine words, not bytes. For example, a DSP with 32K 32-bit RAM words would be advertised as having 128 KB of RAM, not 32 KB.

Things are not so well-defined when it comes to communication standards. ASCII is still widely used, and it has 7-bit bytes (which nicely fit in 8-bit bytes on computers). UART transceivers are still produced with configurable byte size (usually, you get to pick at least between 6, 7 and 8 bits per byte, but 5 and 9 are not unheard of).
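In C specifically, the assumption described above is exposed as CHAR_BIT in <limits.h>: the standard guarantees it is at least 8, and on essentially every modern hosted platform it is exactly 8. A minimal check:

```c
#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* CHAR_BIT is the number of bits in a byte as C defines it (the
     * width of char); the standard only promises that it is >= 8.  */
    printf("bits per byte on this platform: %d\n", CHAR_BIT);
    return 0;
}
```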
First, the tutorial that you are referencing seems to be quite outdated, and seems to be directed at old versions of x86 processors without stating it, so lots of the things you read there will not be understood by others (for example, if you claim that a WORD is 2 bytes, people will either not know what you are talking about, or they will know that you have been taught based on very outdated x86 processors and will know what to expect).

A byte is whatever number of bits someone decides it should be. It could be 8 bits, 9 bits, 16 bits, anything. In 2016, in most cases a byte will be eight bits. To be safe you can use the term octet - an octet is always, always, eight bits.

The real confusion here is mixing up two questions:
1. What is the number of bits in a byte?
2. If I wanted to transfer one byte from one place to another, or if I wanted to store a byte, using practical physical means, how would I do that?
The second question is usually of little interest to you, unless you work at a company making modems, hard drives, or SSDs. In practice you are interested in the first question, and for the second one you just say "well, someone looks after that".

The parity bit that was mentioned is a primitive mechanism for detecting that a byte stored in memory has been changed by some accident by the time it is read back. It's not very good at that: it cannot detect that two bits have changed (such a change goes undetected), and it cannot recover from the error, because there is no way to find out which of the 8 bits changed, or whether it was the parity bit itself that changed. Parity bits are practically not used in that primitive form. Data that is stored permanently is usually protected in more complicated ways, for example by adding a 32-bit or longer checksum to a block of 1024 bytes - which takes much less extra space (about 0.4% in this example instead of 12.5%) and is much less likely to miss a corruption.
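To illustrate why a lone parity bit is such weak protection, here is a small C sketch (the helper name is mine, purely illustrative). It computes an even-parity bit for a byte, then shows that a single flipped bit is caught while two flipped bits slip through unnoticed:

```c
#include <stdint.h>
#include <stdio.h>

/* Even parity over the 8 data bits: returns 1 if the number of set bits
 * is odd, so that the data bits plus the parity bit together always have
 * an even number of ones. */
static uint8_t parity_bit(uint8_t byte)
{
    uint8_t p = 0;
    for (int i = 0; i < 8; i++)
        p ^= (byte >> i) & 1;
    return p;
}

int main(void)
{
    uint8_t stored = 0x5A;                  /* byte written to "memory" */
    uint8_t parity = parity_bit(stored);    /* stored alongside it      */

    uint8_t one_flip = stored ^ 0x08;       /* one bit corrupted        */
    uint8_t two_flip = stored ^ 0x18;       /* two bits corrupted       */

    /* A mismatch between recomputed and stored parity signals an error. */
    printf("one bit flipped:  %s\n",
           parity_bit(one_flip) != parity ? "detected" : "missed");
    printf("two bits flipped: %s\n",
           parity_bit(two_flip) != parity ? "detected" : "missed");
    return 0;
}
```

Even when a mismatch is reported, nothing says which of the nine stored bits (eight data plus parity) is the wrong one, so the error cannot be repaired; that limitation is exactly what schemes like the SECDED code sketched earlier, or the block checksums mentioned above, are designed to overcome.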