What was the main purpose of bitshift instructions in CPUs?
As far as I know, even simple RISC microcontrollers have a bitshift operator, and honestly I have had to use it only once, when I had to compute a division on an MCU whose ALU could not divide.
Were the bitshift operators included so that it was possible, even with a very simple ALU, to efficiently compute multiplication, division, and square roots?
I'm wondering this because that is how it was done on mechanical calculators, so it looks plausible to me that the first processors somewhat mimicked existing technology.
EDIT
I'm not asking what the uses of bitshift operators are, but why they were included back then. Since every operation added had a cost in components, I imagine the designers were striving for the smallest possible number of components.
So the question is whether the bitshift operators are an innovation added when computers were put on a chip, or whether very early CPUs also had these operators. And if so, why were they included in the instruction repertoire of early CPUs? What value did they add? (paragraph proposed by Walter Mitty)
My line of thought was that, since early computers were created to speed up the work done by human computers, who used mechanical calculators (which shift values to perform calculations), it seemed plausible to me that electronic computers were designed to use, at least partially, the existing algorithms. I asked this question because I wanted to know whether there is some truth in this or I was completely wrong.
Tags: instruction-set, alu
This is a good question, but I'm not sure how it is specific to Retrocomputing: modern CPUs still have bit-shift instructions. – manassehkatz, Jan 31 at 18:51

What do you mean 'was'? They're still there. – user207421, Jan 31 at 22:47

@manassehkatz Good point, but you could focus the question on why they were implemented back then (the historic reason), which would make it arguably on-topic. I guess many of the uses weren't thought of back then. But then, addressing the OP: how would you ever write a non-trivial program in assembler without bit-shifting? ;) – Felix Palmen, Jan 31 at 23:52

The Harvard Mark I (as far as I can tell) had shift instructions. ENIAC had shift instructions. They've been there from day one because they're useful (and a single shift or rotate is trivial to implement). – Kelvin Sherlock, Feb 1 at 0:29

Possible duplicates: 'Practical applications of bit shifting', 'Practical applications of bitwise operations', 'Real world use cases of bitwise operators', 'Have you ever had to use bit shifting in real projects?' – phuclv, Feb 1 at 2:25

(21 more comments not shown)
Asked Jan 31 at 18:26 by Onirepap; edited Feb 2 at 11:34.
5 Answers
Some uses for a bitshift operation:
- implementing multiplication more efficiently than by repeated addition
- implementing division algorithms
- implementing integer exponentiation by integer powers
- bitwise algorithms, e.g. "how many one bits are in this integer?"
- fast multiplication and division by powers of 2, including indexing arrays of integer types (whose element sizes tend to be a power of 2 bytes)
- sending and receiving data on serial lines
- manipulating bit fields within larger types
- extracting portions of a combined type and aligning them into new types (Raffzahn)
- manipulating the fields of a software floating-point implementation, e.g. aligning the mantissa for addition and normalising it after any arithmetic operation (Tommy)
- accessing the digits in packed BCD formats, i.e. two digits per byte (Raffzahn)
- aligning the decimal point in fixed-point arithmetic (davidbak)
- cryptography (forest, tofro)
- aligning rasterised packed bitmaps when blitting them to screen memory (user3570736)

These are just off the top of my head. I'm sure you'll get many answers that include other uses for bit shifts.
"I'm not asking what the uses of bitshift operators are, but the reason why they were included back then."

As I hope you can see from the list, even "back then" bit shift operations were regarded as extremely useful. In particular, it would be hard to write an efficient software multiplication routine without bit shifts.
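To make that last point concrete, here is a minimal sketch in C of the classic shift-and-add routine that a CPU with no multiply instruction would run in software (the function name and widths are illustrative, not taken from any particular machine):

    #include <stdint.h>

    /* Shift-and-add multiplication: the software fallback on CPUs
     * without a hardware multiplier. Examines the multiplier one bit
     * at a time; each loop iteration costs one shift (and sometimes
     * one add) rather than adding 'a' to itself b times. */
    uint16_t mul8x8(uint8_t a, uint8_t b)
    {
        uint16_t product = 0;
        uint16_t addend  = a;      /* a shifted left to the current bit position */

        while (b != 0) {
            if (b & 1)             /* low bit set: this power of two contributes */
                product += addend;
            addend <<= 1;          /* next partial product: a * 2^k */
            b >>= 1;               /* expose the next multiplier bit */
        }
        return product;
    }

Without shift instructions, both the `addend <<= 1` and `b >>= 1` steps would themselves have to be faked (e.g. by adding a value to itself), which is why shifts were considered essential even on the earliest machines.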
Not to forget the even more important task of extracting portions of a combined type and aligning them into new types. That included everything from bitfields to BCD. – Raffzahn, Jan 31 at 19:56

@Tommy I think those are technically included in my last bullet point, although they may be large enough areas to merit a new category. – JeremyP, Jan 31 at 20:46

Anything that involves feedback shift registers, used for random numbers and especially cryptography. – tofro, Feb 1 at 8:23

Even the lowly UART has a bit shifting operation! – Walter Mitty, Feb 1 at 14:07

@Raffzahn Which one did I get wrong? Belay that, it was the BCD one, wasn't it? – JeremyP, Feb 1 at 20:00

(10 more comments not shown)
Then you haven't done any embedded work, or anything to do with comms or protocols.
If you're working with embedded code, you need a way to get values into specific bits of a register. This is true even for the base processor, but especially so for microcontrollers, which have extra registers to control the I/O, and for data going to and from devices such as ADCs and DACs. Bitwise operations are essential here, including bit shifting.
For an example of where this is important, I've recently been working with an ADC where the top 3 bits of its 24-bit register specify which of the 8 channels to convert. How would you turn a channel number (0-7) into the appropriate bits in this register, or vice versa, without a bitshift? You could use integer multiplies and divides, but that would be stupidly wasteful given the time those instructions take. (And under the surface, those instructions are using bitshifts themselves anyway!) Technically you could also use a case statement or lookup table for each channel number, but that doesn't scale well - don't try using a 16-million-term case statement to extract individual colours from a 24-bit RGB value!
In the same way, comms protocols regularly pack data into individual bits, especially in packet headers. Again, you need a way to get a number from a variable into any arbitrary group of bits in your protocol bytes, or vice versa. And the same problem has the same solution - you need bitshifting.
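As a concrete illustration of that ADC example, here is a minimal sketch in C. The register layout is as described above (channel field in bits 23..21); the macro and function names are mine, not from any real vendor header:

    #include <stdint.h>

    #define ADC_CHAN_SHIFT 21u                      /* channel field occupies bits 23..21 */
    #define ADC_CHAN_MASK  (0x7u << ADC_CHAN_SHIFT) /* 0x00E00000 */

    /* Build the 24-bit control word for a given channel (0-7),
     * preserving whatever is in the other 21 bits. */
    static uint32_t adc_set_channel(uint32_t reg, uint32_t channel)
    {
        reg &= ~ADC_CHAN_MASK;                      /* clear the old channel bits */
        reg |= (channel << ADC_CHAN_SHIFT) & ADC_CHAN_MASK;
        return reg;
    }

    /* Recover the channel number from a register value: the mask
     * selects the field, the shift turns it back into 0-7. */
    static uint32_t adc_get_channel(uint32_t reg)
    {
        return (reg & ADC_CHAN_MASK) >> ADC_CHAN_SHIFT;
    }

Note that masking alone only isolates the field; it is the shift that converts between "channel 7" and 0xE00000, which is exactly the point argued in the comments below.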
"…where the top 3 bits of its 24-bit register specify which of the 8 channels to convert. How would you get a value in there without a bitshift?" - bit masking? (AND, OR, XOR) – Felix Palmen, Feb 1 at 0:01

@FelixPalmen Bit masking gets rid of the rest, but if you want to select those 3 bits and then (essentially) treat them as a number by themselves, you need to shift them. You could divide them, but a shift is a whole lot faster - and depending on what you need, you might not even need to bother with the bit masking. – manassehkatz, Feb 1 at 1:26

@FelixPalmen If I start with the values 0x000000, 0x200000, 0x400000, 0x600000 up to 0xE00000, then I can put that value into the top 3 bits using bitwise AND/OR. But suppose I start with the channel numbers 0, 1, 2, 3 up to 7. How do I turn the value 7 into 0xE00000, or vice versa? – Graham, Feb 1 at 5:15

@FelixPalmen Bit masking is not enough. You either need a big if/else or case block, integer division, or a bit shift in addition to the bit masking. – slebetman, Feb 1 at 8:02

@Onirepap If you only need a single bit, sure. If you need the top 3 bits, though, that's less easy - unless you do "read bit 23 from register A, store the result in bit 2 of register B; read bit 22 from register A, store the result in bit 1 of register B; read bit 21 from register A, store the result in bit 0 of register B". At which point you've effectively recreated an atomic read-and-bitshift with a less efficient, more complex set of instructions. Technically possible, I guess, but not a great plan. – Graham, Feb 1 at 11:24

(3 more comments not shown)
Along with the many uses of bitshifting (multiplying and dividing by powers of 2, bitmasks, etc.), bitshifting is also very cheap to implement: a simple implementation uses one multiplexer per bit for a logical shift in each direction. Adding arithmetic shifts is easy too: arithmetic left shift is identical to logical left shift, while arithmetic right shift only requires filling the vacated bits with the MSB.
Designers of old CPUs like the Zilog Z80 and MOS 6502 could afford to put in a little bit of extra logic, whereas hardware multiplication was significantly more resource-intensive and thus expensive. For example, only the last 8-bit CPUs had hardware multiplication (notably the Motorola 6809); the 16-bit Motorola 68000, released soon afterwards, had multiplication instructions as well.
Here's a straightforward 3-bit logical left shifter (only three 8-bit multiplexers are needed): [schematic image not reproduced]
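Since the schematic isn't reproduced here, a rough sketch in C of the same three-stage idea (this models the hardware structure; it is not how you would shift in software):

    #include <stdint.h>

    /* Model of a 3-stage logical left shifter for 8-bit values.
     * Each stage corresponds to one 8-bit-wide 2:1 multiplexer: it
     * either passes its input through unchanged or passes it shifted
     * by a fixed amount (1, 2, or 4 places), selected by one bit of
     * the shift count. Any shift of 0-7 is the composition of the
     * three stages, so three muxes cover all eight shift amounts. */
    static uint8_t shl3(uint8_t value, unsigned count)
    {
        uint8_t s = value;
        s = (count & 1u) ? (uint8_t)(s << 1) : s;  /* stage 0: shift by 1? */
        s = (count & 2u) ? (uint8_t)(s << 2) : s;  /* stage 1: shift by 2? */
        s = (count & 4u) ? (uint8_t)(s << 4) : s;  /* stage 2: shift by 4? */
        return s;
    }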
A three-level multiplexing arrangement will allow the job to be done with fewer transistors than a barrel shifter (eight one-of-eight deciders, activated by driving one of eight enable wires)… – supercat, Feb 1 at 1:34

…but will in practice often take more space, because of the need to run parallel groups of wires between the top-half and bottom-half muxes. – supercat, Feb 1 at 1:41

FWIW: the 6502 only had single-bit shifts and rotates, which are even easier to implement in hardware. (If you wanted to shift by multiple bits, you'd simply repeat the operation.) – duskwuff, Feb 3 at 21:20
In the days before floating-point hardware, there was fixed-point integer arithmetic! Bit shift instructions were used to implement the "scaling factor" - specifically, to adjust it when you multiplied (or divided) numbers and then had to rescale the result to achieve the desired precision (or when adding and subtracting, if the operands had different scale factors).
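For instance, here is a minimal sketch in C of a Q8.8 fixed-point multiply (the Q8.8 format is chosen purely for illustration; the actual code described below used whatever scale the signal processing demanded):

    #include <stdint.h>

    /* Q8.8 fixed point: a 16-bit value holding (real number * 256).
     * The scaling factor is 2^8, so rescaling is a plain shift. */
    typedef int16_t q8_8;

    #define Q8_8_FROM_INT(x)  ((q8_8)((x) << 8))

    /* Multiplying two Q8.8 numbers yields a result at scale 2^16 in
     * 32 bits; shifting right by 8 rescales it back to Q8.8.
     * (Relies on arithmetic right shift of negative values, which
     * virtually all compilers provide.) */
    static q8_8 q8_8_mul(q8_8 a, q8_8 b)
    {
        int32_t wide = (int32_t)a * (int32_t)b;  /* scale is now 2^16 */
        return (q8_8)(wide >> 8);                /* back to scale 2^8 */
    }

Example: 1.5 is stored as 384 and 2.0 as 512; their product is 196608, and `>> 8` gives 768, which is 3.0 at scale 256.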
... and now a small break for a fun anecdote:
My first professional programming job involved working on a Honeywell 716 minicomputer: 16-bit word size. It had hardware floating point, but it was very slow. It had left-shift and right-shift instructions, but the shift amount was an immediate (4-bit) value in the instruction - there was no instruction or addressing mode where that value was in a register. I was doing a bunch of fixed-point arithmetic for signal processing on inputs that had a large dynamic range. There was no single scaling factor I could use for a certain multiply, because whatever scaling factor I picked would either overflow for large inputs or lose all precision for small inputs. So ... I computed the necessary scaling factor from the input values and poked it directly into the shift instruction I was about to execute! (I couldn't use a jump table to the correct shift instruction or duplicate code blocks: I was also under a tight memory budget; those 716s only had 32K words of memory total.)
First job, and I had to break all the rules. I had just graduated college, and to get approval for that design and code I had to convince the top staff engineer, so that he could go to the Navy and get their approval.
"In the days before floating point hardware..." - and in the days long after floating-point hardware. In 2006/2007 I worked at a games company where floating-point arithmetic was forbidden for portability reasons. – Peter Taylor, Feb 1 at 10:47

... and in the days with FPGAs. I worked on a project from 1999-2001 to develop a low-cost car engine controller, using a 16-bit processor without floating-point support. Lots of fixed-point arithmetic. From then on, all the processors I used had floating-point support, and I breathed a sigh of relief at never needing fixed point again. Until three years ago, when I got involved with FPGA work - and suddenly everything is back in fixed-point processing again! – Graham, Feb 1 at 14:16

@Graham - but check out unums/posits! Maybe some fresh air is coming to FPGA/ASIC work? – davidbak, Feb 1 at 14:18
The fundamental unit of digital information is the bit. Bytes, words, pages, blocks, sectors, tracks, and what have you are all made up of bits. Just about everything that can be done with information can be built up from operations on bits.
Since at least as far back as the 1940s, numbers in computers have usually been represented as sets of bits. Thus the number 23 might be represented as [16, 4, 2, 1], i.e. binary 10111. This is economical both for representation and for manipulation.
Just about every processor has special operators for sensing and acting on the sign bit - I'm thinking of instructions like "transfer on minus". If you combine these with instructions for rotating or shifting the bits in a register, you can build up just about any complex operation imaginable. The more common operations, like adding two numbers, are going to have instructions of their own, even in a RISC processor. The less common operations are going to be done by bit manipulation.
@JeremyP has an excellent list of bit operations in his answer. It's only a partial list; I'll add telephone line management. If you have a bunch of telephone lines, it might be fairly easy to supply a processor with a set of bits, one bit per line, where a one bit means "on-hook status has changed".
You can do just about anything, one little bit at a time.
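To illustrate the "sign test plus shift" building block, here is a minimal sketch in C of scanning every bit of a word using only those two primitives - the kind of loop a machine with just "transfer on minus" and a single-bit shift could run. (It counts one bits, one entry from JeremyP's list; the function is mine, purely illustrative.)

    #include <stdint.h>

    /* Count the one bits in a 16-bit word using only a sign test and
     * a single-bit left shift. Each iteration, the current MSB plays
     * the role of the sign bit ("transfer on minus"); the shift then
     * brings the next bit into the MSB position. */
    static unsigned popcount16(uint16_t word)
    {
        unsigned count = 0;
        for (int i = 0; i < 16; i++) {
            if (word & 0x8000u)   /* "sign bit" set? (transfer on minus) */
                count++;
            word <<= 1;           /* expose the next bit as the MSB */
        }
        return count;
    }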
Setting the sign flag after operations shouldn't actually require shift instructions; it's implemented directly. – qwr, Feb 1 at 17:34
Your Answer
StackExchange.ready(function() {
var channelOptions = {
tags: "".split(" "),
id: "648"
};
initTagRenderer("".split(" "), "".split(" "), channelOptions);
StackExchange.using("externalEditor", function() {
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled) {
StackExchange.using("snippets", function() {
createEditor();
});
}
else {
createEditor();
}
});
function createEditor() {
StackExchange.prepareEditor({
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: false,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: null,
bindNavPrevention: true,
postfix: "",
imageUploader: {
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
},
noCode: true, onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
});
}
});
Sign up or log in
StackExchange.ready(function () {
StackExchange.helpers.onClickDraftSave('#login-link');
});
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
StackExchange.ready(
function () {
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fretrocomputing.stackexchange.com%2fquestions%2f9026%2fwhat-was-the-main-purpose-of-bitshift-instructions-in-cpu%23new-answer', 'question_page');
}
);
Post as a guest
Required, but never shown
5 Answers
5
active
oldest
votes
5 Answers
5
active
oldest
votes
active
oldest
votes
active
oldest
votes
Some uses for a bitshift operation:
- implementing a more efficient multiplication than repeated adding
- implementing division algorithms
- implementing an algorithm for exponentiation of integers by other integers
- bitwise algorithms e.g. "how many one bits in this integer"
- fast multiplication and division by powers of 2 including indexing arrays of integer types (which tend to have sizes of a power of 2 of bytes).
- sending and receiving data on serial lines.
- manipulating bit fields within larger types.
- extracting portions of a combined type and aligning them into new types (Raffzahn)
- manipulating the fields of a software floating point implementation (e.g. aligning the mantissa for addition and normalising the same after any arithmetic operation) (Tommy)
- Accessing the digits in packed BCD formats (i.e. two digits per byte) (Raffzahn)
- Aligning the decimal point in fixed point arithmetic (davidbak)
- cryptography (forest, tofro)
- Aligning rasterised packed bitmaps when blitting them to screen memory (user3570736)
These are just off the top of my head. I'm sure you'll get many answers that include other uses for bit shifts.
I'm not asking what are the use of bitshift operators, but the reason why they were included back then
As I hope you can see from the list, even "back then" bit shift operations were regarded as extremely useful. In particular, it would be hard to write an efficient software multiplication routine without bit shifts.
5
Not to forget the even more important task of extracting portions of a combined type and aligning them into new types. That included everything from bitfields to BCD.
– Raffzahn
Jan 31 at 19:56
1
@Tommy I think those are technically included in my last bullet point although they may be large enough areas to merit a new category.
– JeremyP
Jan 31 at 20:46
1
Anything that involves feedback shift registers, used for random numbers and especially cryptography.
– tofro
Feb 1 at 8:23
1
Even the lowly UART has a bit shifting operation!
– Walter Mitty
Feb 1 at 14:07
1
@Raffzahn which one did I get wrong? Belay that, it was the BCD one wasn't it.
– JeremyP
Feb 1 at 20:00
|
show 10 more comments
Some uses for a bitshift operation:
- implementing a more efficient multiplication than repeated adding
- implementing division algorithms
- implementing an algorithm for exponentiation of integers by other integers
- bitwise algorithms e.g. "how many one bits in this integer"
- fast multiplication and division by powers of 2 including indexing arrays of integer types (which tend to have sizes of a power of 2 of bytes).
- sending and receiving data on serial lines.
- manipulating bit fields within larger types.
- extracting portions of a combined type and aligning them into new types (Raffzahn)
- manipulating the fields of a software floating point implementation (e.g. aligning the mantissa for addition and normalising the same after any arithmetic operation) (Tommy)
- Accessing the digits in packed BCD formats (i.e. two digits per byte) (Raffzahn)
- Aligning the decimal point in fixed point arithmetic (davidbak)
- cryptography (forest, tofro)
- Aligning rasterised packed bitmaps when blitting them to screen memory (user3570736)
These are just off the top of my head. I'm sure you'll get many answers that include other uses for bit shifts.
I'm not asking what are the use of bitshift operators, but the reason why they were included back then
As I hope you can see from the list, even "back then" bit shift operations were regarded as extremely useful. In particular, it would be hard to write an efficient software multiplication routine without bit shifts.
5
Not to forget the even more important task of extracting portions of a combined type and aligning them into new types. That included everything from bitfields to BCD.
– Raffzahn
Jan 31 at 19:56
1
@Tommy I think those are technically included in my last bullet point although they may be large enough areas to merit a new category.
– JeremyP
Jan 31 at 20:46
1
Anything that involves feedback shift registers, used for random numbers and especially cryptography.
– tofro
Feb 1 at 8:23
1
Even the lowly UART has a bit shifting operation!
– Walter Mitty
Feb 1 at 14:07
1
@Raffzahn which one did I get wrong? Belay that, it was the BCD one wasn't it.
– JeremyP
Feb 1 at 20:00
|
show 10 more comments
Some uses for a bitshift operation:
- implementing a more efficient multiplication than repeated adding
- implementing division algorithms
- implementing an algorithm for exponentiation of integers by other integers
- bitwise algorithms e.g. "how many one bits in this integer"
- fast multiplication and division by powers of 2 including indexing arrays of integer types (which tend to have sizes of a power of 2 of bytes).
- sending and receiving data on serial lines.
- manipulating bit fields within larger types.
- extracting portions of a combined type and aligning them into new types (Raffzahn)
- manipulating the fields of a software floating point implementation (e.g. aligning the mantissa for addition and normalising the same after any arithmetic operation) (Tommy)
- Accessing the digits in packed BCD formats (i.e. two digits per byte) (Raffzahn)
- Aligning the decimal point in fixed point arithmetic (davidbak)
- cryptography (forest, tofro)
- Aligning rasterised packed bitmaps when blitting them to screen memory (user3570736)
These are just off the top of my head. I'm sure you'll get many answers that include other uses for bit shifts.
I'm not asking what are the use of bitshift operators, but the reason why they were included back then
As I hope you can see from the list, even "back then" bit shift operations were regarded as extremely useful. In particular, it would be hard to write an efficient software multiplication routine without bit shifts.
Some uses for a bitshift operation:
- implementing a more efficient multiplication than repeated adding
- implementing division algorithms
- implementing an algorithm for exponentiation of integers by other integers
- bitwise algorithms e.g. "how many one bits in this integer"
- fast multiplication and division by powers of 2 including indexing arrays of integer types (which tend to have sizes of a power of 2 of bytes).
- sending and receiving data on serial lines.
- manipulating bit fields within larger types.
- extracting portions of a combined type and aligning them into new types (Raffzahn)
- manipulating the fields of a software floating point implementation (e.g. aligning the mantissa for addition and normalising the same after any arithmetic operation) (Tommy)
- Accessing the digits in packed BCD formats (i.e. two digits per byte) (Raffzahn)
- Aligning the decimal point in fixed point arithmetic (davidbak)
- cryptography (forest, tofro)
- Aligning rasterised packed bitmaps when blitting them to screen memory (user3570736)
These are just off the top of my head. I'm sure you'll get many answers that include other uses for bit shifts.
I'm not asking what are the use of bitshift operators, but the reason why they were included back then
As I hope you can see from the list, even "back then" bit shift operations were regarded as extremely useful. In particular, it would be hard to write an efficient software multiplication routine without bit shifts.
edited Feb 4 at 9:29
answered Jan 31 at 18:58
JeremyPJeremyP
5,20011930
5,20011930
5
Not to forget the even more important task of extracting portions of a combined type and aligning them into new types. That included everything from bitfields to BCD.
– Raffzahn
Jan 31 at 19:56
1
@Tommy I think those are technically included in my last bullet point although they may be large enough areas to merit a new category.
– JeremyP
Jan 31 at 20:46
1
Anything that involves feedback shift registers, used for random numbers and especially cryptography.
– tofro
Feb 1 at 8:23
1
Even the lowly UART has a bit shifting operation!
– Walter Mitty
Feb 1 at 14:07
1
@Raffzahn which one did I get wrong? Belay that, it was the BCD one wasn't it.
– JeremyP
Feb 1 at 20:00
|
show 10 more comments
5
Not to forget the even more important task of extracting portions of a combined type and aligning them into new types. That included everything from bitfields to BCD.
– Raffzahn
Jan 31 at 19:56
1
@Tommy I think those are technically included in my last bullet point although they may be large enough areas to merit a new category.
– JeremyP
Jan 31 at 20:46
1
Anything that involves feedback shift registers, used for random numbers and especially cryptography.
– tofro
Feb 1 at 8:23
1
Even the lowly UART has a bit shifting operation!
– Walter Mitty
Feb 1 at 14:07
1
@Raffzahn which one did I get wrong? Belay that, it was the BCD one wasn't it.
– JeremyP
Feb 1 at 20:00
5
5
Not to forget the even more important task of extracting portions of a combined type and aligning them into new types. That included everything from bitfields to BCD.
– Raffzahn
Jan 31 at 19:56
Not to forget the even more important task of extracting portions of a combined type and aligning them into new types. That included everything from bitfields to BCD.
– Raffzahn
Jan 31 at 19:56
1
1
@Tommy I think those are technically included in my last bullet point although they may be large enough areas to merit a new category.
– JeremyP
Jan 31 at 20:46
@Tommy I think those are technically included in my last bullet point although they may be large enough areas to merit a new category.
– JeremyP
Jan 31 at 20:46
1
1
Anything that involves feedback shift registers, used for random numbers and especially cryptography.
– tofro
Feb 1 at 8:23
Anything that involves feedback shift registers, used for random numbers and especially cryptography.
– tofro
Feb 1 at 8:23
1
1
Even the lowly UART has a bit shifting operation!
– Walter Mitty
Feb 1 at 14:07
Even the lowly UART has a bit shifting operation!
– Walter Mitty
Feb 1 at 14:07
1
1
@Raffzahn which one did I get wrong? Belay that, it was the BCD one wasn't it.
– JeremyP
Feb 1 at 20:00
@Raffzahn which one did I get wrong? Belay that, it was the BCD one wasn't it.
– JeremyP
Feb 1 at 20:00
|
show 10 more comments
Then you haven't done any embedded work, or anything to do with comms or protocols.
If you're working with embedded code, you need a way to get values into specific bits in a register. This is true even for the base processor, but especially so for microcontrollers which have extra registers to control the I/O, and for data going to and from devices such as ADCs and DACs. Bitwise operations are essential here, including bit shifting.
For an example of where this is important, I've recently been working with an ADC where the top 3 bits of its 24-bit register specify which of the 8 channels to convert. How would you turn a channel number (0-7) into the appropriate bits in this register, or vice versa, without a bitshift? You could use integer multiplies and divides, but that would be stupidly wasteful given the time those instructions take. (And under the surface, those instructions are using bitshifts themelves anyway!) Technically you could also use a case statement or lookup for each channel number, but that doesn't scale well - don't try using a 16-billion-term case statement to extract individual colours from a 24-bit RGB value!
In the same way, comms protocols regularly pack data into individual bits, especially in packet headers. Again, you need a way to get a number from a variable into any arbitrary group of bits in your protocol bytes, or vice versa. And the same problem has the same solution - you need bitshifting.
1
"where the top 3 bits of its 24-bit register specify which of the 8 channels to convert. How would you get a value in there without a bitshift?" -- bit masking? (and, or, xor)
– Felix Palmen
Feb 1 at 0:01
4
@FelixPalmen Bit Masking gets rid of the rest, but if you want to select those 3 bits and then (essentially) treat them as a number by themselves, you need to shift them. You could divide them, but a shift is a whole lot faster - and depending on what you need, you might not even need to bother with the bit masking.
– manassehkatz
Feb 1 at 1:26
1
@FelixPalmen If I start with values 0x000000, 0x200000, 0x400000, 0x600000 to 0xE00000, then I can put that value into the top 3 bits using bitwise AND/OR. But suppose I start with the channel number 0, 1, 2, 3 to 7. How do I turn the value 7 into 0xE00000 or vice versa?
– Graham
Feb 1 at 5:15
1
@FelixPalmen Bit masking is not enough. You either need a big if/else or case block, integer division or bit shift IN ADDITION TO BIT MASKING.
– slebetman
Feb 1 at 8:02
1
@Onirepap If you only need a single bit, sure. If you need the top 3 bits though, that's less easy. Unless you do "read bit 23 from register A, store result in bit 2 of register B, read bit 22 from register A, store result in bit 1 of register B, read bit 21 from register A, store result in bit 0 of register B". At which point you've effectively recreated an atomic read-and-bitshift with a less efficient, more complex set of instructions. Technically possible, I guess, but not a great plan.
– Graham
Feb 1 at 11:24
|
show 3 more comments
Then you haven't done any embedded work, or anything to do with comms or protocols.
If you're working with embedded code, you need a way to get values into specific bits in a register. This is true even for the base processor, but especially so for microcontrollers which have extra registers to control the I/O, and for data going to and from devices such as ADCs and DACs. Bitwise operations are essential here, including bit shifting.
For an example of where this is important, I've recently been working with an ADC where the top 3 bits of its 24-bit register specify which of the 8 channels to convert. How would you turn a channel number (0-7) into the appropriate bits in this register, or vice versa, without a bitshift? You could use integer multiplies and divides, but that would be stupidly wasteful given the time those instructions take. (And under the surface, those instructions are using bitshifts themelves anyway!) Technically you could also use a case statement or lookup for each channel number, but that doesn't scale well - don't try using a 16-billion-term case statement to extract individual colours from a 24-bit RGB value!
In the same way, comms protocols regularly pack data into individual bits, especially in packet headers. Again, you need a way to get a number from a variable into any arbitrary group of bits in your protocol bytes, or vice versa. And the same problem has the same solution - you need bitshifting.
1
"where the top 3 bits of its 24-bit register specify which of the 8 channels to convert. How would you get a value in there without a bitshift?" -- bit masking? (and, or, xor)
– Felix Palmen
Feb 1 at 0:01
4
@FelixPalmen Bit Masking gets rid of the rest, but if you want to select those 3 bits and then (essentially) treat them as a number by themselves, you need to shift them. You could divide them, but a shift is a whole lot faster - and depending on what you need, you might not even need to bother with the bit masking.
– manassehkatz
Feb 1 at 1:26
1
@FelixPalmen If I start with values 0x000000, 0x200000, 0x400000, 0x600000 to 0xE00000, then I can put that value into the top 3 bits using bitwise AND/OR. But suppose I start with the channel number 0, 1, 2, 3 to 7. How do I turn the value 7 into 0xE00000 or vice versa?
– Graham
Feb 1 at 5:15
1
@FelixPalmen Bit masking is not enough. You either need a big if/else or case block, integer division or bit shift IN ADDITION TO BIT MASKING.
– slebetman
Feb 1 at 8:02
1
@Onirepap If you only need a single bit, sure. If you need the top 3 bits though, that's less easy. Unless you do "read bit 23 from register A, store result in bit 2 of register B, read bit 22 from register A, store result in bit 1 of register B, read bit 21 from register A, store result in bit 0 of register B". At which point you've effectively recreated an atomic read-and-bitshift with a less efficient, more complex set of instructions. Technically possible, I guess, but not a great plan.
– Graham
Feb 1 at 11:24
|
show 3 more comments
Then you haven't done any embedded work, or anything to do with comms or protocols.
If you're working with embedded code, you need a way to get values into specific bits in a register. This is true even for the base processor, but especially so for microcontrollers which have extra registers to control the I/O, and for data going to and from devices such as ADCs and DACs. Bitwise operations are essential here, including bit shifting.
For an example of where this is important, I've recently been working with an ADC where the top 3 bits of its 24-bit register specify which of the 8 channels to convert. How would you turn a channel number (0-7) into the appropriate bits in this register, or vice versa, without a bitshift? You could use integer multiplies and divides, but that would be stupidly wasteful given the time those instructions take. (And under the surface, those instructions are using bitshifts themelves anyway!) Technically you could also use a case statement or lookup for each channel number, but that doesn't scale well - don't try using a 16-billion-term case statement to extract individual colours from a 24-bit RGB value!
In the same way, comms protocols regularly pack data into individual bits, especially in packet headers. Again, you need a way to get a number from a variable into any arbitrary group of bits in your protocol bytes, or vice versa. And the same problem has the same solution - you need bitshifting.
Then you haven't done any embedded work, or anything to do with comms or protocols.
If you're working with embedded code, you need a way to get values into specific bits in a register. This is true even for the base processor, but especially so for microcontrollers which have extra registers to control the I/O, and for data going to and from devices such as ADCs and DACs. Bitwise operations are essential here, including bit shifting.
For an example of where this is important, I've recently been working with an ADC where the top 3 bits of its 24-bit register specify which of the 8 channels to convert. How would you turn a channel number (0-7) into the appropriate bits in this register, or vice versa, without a bitshift? You could use integer multiplies and divides, but that would be stupidly wasteful given the time those instructions take. (And under the surface, those instructions are using bitshifts themelves anyway!) Technically you could also use a case statement or lookup for each channel number, but that doesn't scale well - don't try using a 16-billion-term case statement to extract individual colours from a 24-bit RGB value!
In the same way, comms protocols regularly pack data into individual bits, especially in packet headers. Again, you need a way to get a number from a variable into any arbitrary group of bits in your protocol bytes, or vice versa. And the same problem has the same solution - you need bitshifting.
edited Feb 1 at 11:20
answered Jan 31 at 23:58


GrahamGraham
76727
76727
1
"where the top 3 bits of its 24-bit register specify which of the 8 channels to convert. How would you get a value in there without a bitshift?" -- bit masking? (and, or, xor)
– Felix Palmen
Feb 1 at 0:01
4
@FelixPalmen Bit Masking gets rid of the rest, but if you want to select those 3 bits and then (essentially) treat them as a number by themselves, you need to shift them. You could divide them, but a shift is a whole lot faster - and depending on what you need, you might not even need to bother with the bit masking.
– manassehkatz
Feb 1 at 1:26
1
@FelixPalmen If I start with values 0x000000, 0x200000, 0x400000, 0x600000 to 0xE00000, then I can put that value into the top 3 bits using bitwise AND/OR. But suppose I start with the channel number 0, 1, 2, 3 to 7. How do I turn the value 7 into 0xE00000 or vice versa?
– Graham
Feb 1 at 5:15
1
@FelixPalmen Bit masking is not enough. You either need a big if/else or case block, integer division or bit shift IN ADDITION TO BIT MASKING.
– slebetman
Feb 1 at 8:02
1
@Onirepap If you only need a single bit, sure. If you need the top 3 bits though, that's less easy. Unless you do "read bit 23 from register A, store result in bit 2 of register B, read bit 22 from register A, store result in bit 1 of register B, read bit 21 from register A, store result in bit 0 of register B". At which point you've effectively recreated an atomic read-and-bitshift with a less efficient, more complex set of instructions. Technically possible, I guess, but not a great plan.
– Graham
Feb 1 at 11:24
|
show 3 more comments
1
"where the top 3 bits of its 24-bit register specify which of the 8 channels to convert. How would you get a value in there without a bitshift?" -- bit masking? (and, or, xor)
– Felix Palmen
Feb 1 at 0:01
4
@FelixPalmen Bit Masking gets rid of the rest, but if you want to select those 3 bits and then (essentially) treat them as a number by themselves, you need to shift them. You could divide them, but a shift is a whole lot faster - and depending on what you need, you might not even need to bother with the bit masking.
– manassehkatz
Feb 1 at 1:26
1
@FelixPalmen If I start with values 0x000000, 0x200000, 0x400000, 0x600000 to 0xE00000, then I can put that value into the top 3 bits using bitwise AND/OR. But suppose I start with the channel number 0, 1, 2, 3 to 7. How do I turn the value 7 into 0xE00000 or vice versa?
– Graham
Feb 1 at 5:15
1
@FelixPalmen Bit masking is not enough. You either need a big if/else or case block, integer division or bit shift IN ADDITION TO BIT MASKING.
– slebetman
Feb 1 at 8:02
1
@Onirepap If you only need a single bit, sure. If you need the top 3 bits though, that's less easy. Unless you do "read bit 23 from register A, store result in bit 2 of register B, read bit 22 from register A, store result in bit 1 of register B, read bit 21 from register A, store result in bit 0 of register B". At which point you've effectively recreated an atomic read-and-bitshift with a less efficient, more complex set of instructions. Technically possible, I guess, but not a great plan.
– Graham
Feb 1 at 11:24
1
1
"where the top 3 bits of its 24-bit register specify which of the 8 channels to convert. How would you get a value in there without a bitshift?" -- bit masking? (and, or, xor)
– Felix Palmen
Feb 1 at 0:01
"where the top 3 bits of its 24-bit register specify which of the 8 channels to convert. How would you get a value in there without a bitshift?" -- bit masking? (and, or, xor)
– Felix Palmen
Feb 1 at 0:01
4
4
@FelixPalmen Bit Masking gets rid of the rest, but if you want to select those 3 bits and then (essentially) treat them as a number by themselves, you need to shift them. You could divide them, but a shift is a whole lot faster - and depending on what you need, you might not even need to bother with the bit masking.
– manassehkatz
Feb 1 at 1:26
@FelixPalmen Bit Masking gets rid of the rest, but if you want to select those 3 bits and then (essentially) treat them as a number by themselves, you need to shift them. You could divide them, but a shift is a whole lot faster - and depending on what you need, you might not even need to bother with the bit masking.
– manassehkatz
Feb 1 at 1:26
1
1
@FelixPalmen If I start with values 0x000000, 0x200000, 0x400000, 0x600000 to 0xE00000, then I can put that value into the top 3 bits using bitwise AND/OR. But suppose I start with the channel number 0, 1, 2, 3 to 7. How do I turn the value 7 into 0xE00000 or vice versa?
– Graham
Feb 1 at 5:15
@FelixPalmen If I start with values 0x000000, 0x200000, 0x400000, 0x600000 to 0xE00000, then I can put that value into the top 3 bits using bitwise AND/OR. But suppose I start with the channel number 0, 1, 2, 3 to 7. How do I turn the value 7 into 0xE00000 or vice versa?
– Graham
Feb 1 at 5:15
1
1
@FelixPalmen Bit masking is not enough. You either need a big if/else or case block, integer division or bit shift IN ADDITION TO BIT MASKING.
– slebetman
Feb 1 at 8:02
@FelixPalmen Bit masking is not enough. You either need a big if/else or case block, integer division or bit shift IN ADDITION TO BIT MASKING.
– slebetman
Feb 1 at 8:02
1
1
@Onirepap If you only need a single bit, sure. If you need the top 3 bits though, that's less easy. Unless you do "read bit 23 from register A, store result in bit 2 of register B, read bit 22 from register A, store result in bit 1 of register B, read bit 21 from register A, store result in bit 0 of register B". At which point you've effectively recreated an atomic read-and-bitshift with a less efficient, more complex set of instructions. Technically possible, I guess, but not a great plan.
– Graham
Feb 1 at 11:24
@Onirepap If you only need a single bit, sure. If you need the top 3 bits though, that's less easy. Unless you do "read bit 23 from register A, store result in bit 2 of register B, read bit 22 from register A, store result in bit 1 of register B, read bit 21 from register A, store result in bit 0 of register B". At which point you've effectively recreated an atomic read-and-bitshift with a less efficient, more complex set of instructions. Technically possible, I guess, but not a great plan.
– Graham
Feb 1 at 11:24
|
show 3 more comments
Along with the many uses of bitshifting (multiplying and dividing by powers of 2, bitmasks, etc.), bitshifting is also very cheap to implement: a simple implementation uses one multiplexer per bit to logical shift for each direction. Adding arithmetic shift is easy too: arithmetic left shift is identical to logical left shift, while arithmetic right shift only requires filling in shifted bits with the MSB.
Designers of old CPUs like the Z80 and MOS 6502 could afford to put in a little bit of extra logic, while hardware multiplication was significantly more resource-intensive and thus expensive. For example only the last 8-bit CPUs had hardware multiplication (notably Motorola 6809, the soon released 16-bit Motorola 68000 had multiplication instructions as well).
Here's a straightforward 3 bit logical left shifter (only 3 8-bit multiplexers are needed):
Using a three-level multiplexing arrangement will allow the job to be done more with fewer transistors than using a barrel shifter (eight one-of-eight deciders, activated by driving one of eight enable wires).
– supercat
Feb 1 at 1:34
...but will in practice often take more space because of the need to run parallel groups of wires between top-half and bottom-half muxes.
– supercat
Feb 1 at 1:41
FWIW: The 6502 only had single-bit shifts and rotates, which are even easier to implement in hardware. (If you wanted to shift by multiple bits, you'd simply repeat the operation.)
– duskwuff
Feb 3 at 21:20
add a comment |
Along with the many uses of bitshifting (multiplying and dividing by powers of 2, bitmasks, etc.), bitshifting is also very cheap to implement: a simple implementation uses one multiplexer per bit to logical shift for each direction. Adding arithmetic shift is easy too: arithmetic left shift is identical to logical left shift, while arithmetic right shift only requires filling in shifted bits with the MSB.
Designers of old CPUs like the Z80 and MOS 6502 could afford to put in a little bit of extra logic, while hardware multiplication was significantly more resource-intensive and thus expensive. For example only the last 8-bit CPUs had hardware multiplication (notably Motorola 6809, the soon released 16-bit Motorola 68000 had multiplication instructions as well).
Here's a straightforward 3 bit logical left shifter (only 3 8-bit multiplexers are needed):
Using a three-level multiplexing arrangement will allow the job to be done more with fewer transistors than using a barrel shifter (eight one-of-eight deciders, activated by driving one of eight enable wires).
– supercat
Feb 1 at 1:34
...but will in practice often take more space because of the need to run parallel groups of wires between top-half and bottom-half muxes.
– supercat
Feb 1 at 1:41
FWIW: The 6502 only had single-bit shifts and rotates, which are even easier to implement in hardware. (If you wanted to shift by multiple bits, you'd simply repeat the operation.)
– duskwuff
Feb 3 at 21:20
add a comment |
Along with the many uses of bitshifting (multiplying and dividing by powers of 2, bitmasks, etc.), bitshifting is also very cheap to implement: a simple implementation uses one multiplexer per bit to logical shift for each direction. Adding arithmetic shift is easy too: arithmetic left shift is identical to logical left shift, while arithmetic right shift only requires filling in shifted bits with the MSB.
Designers of old CPUs like the Z80 and MOS 6502 could afford to put in a little bit of extra logic, while hardware multiplication was significantly more resource-intensive and thus expensive. For example only the last 8-bit CPUs had hardware multiplication (notably Motorola 6809, the soon released 16-bit Motorola 68000 had multiplication instructions as well).
Here's a straightforward 3 bit logical left shifter (only 3 8-bit multiplexers are needed):
Along with the many uses of bitshifting (multiplying and dividing by powers of 2, bitmasks, etc.), bitshifting is also very cheap to implement: a simple implementation uses one multiplexer per bit to logical shift for each direction. Adding arithmetic shift is easy too: arithmetic left shift is identical to logical left shift, while arithmetic right shift only requires filling in shifted bits with the MSB.
Designers of old CPUs like the Z80 and MOS 6502 could afford to put in a little bit of extra logic, while hardware multiplication was significantly more resource-intensive and thus expensive. For example only the last 8-bit CPUs had hardware multiplication (notably Motorola 6809, the soon released 16-bit Motorola 68000 had multiplication instructions as well).
Here's a straightforward 3 bit logical left shifter (only 3 8-bit multiplexers are needed):
answered Feb 1 at 1:27
qwrqwr
1634
1634
Using a three-level multiplexing arrangement will allow the job to be done more with fewer transistors than using a barrel shifter (eight one-of-eight deciders, activated by driving one of eight enable wires).
– supercat
Feb 1 at 1:34
...but will in practice often take more space because of the need to run parallel groups of wires between top-half and bottom-half muxes.
– supercat
Feb 1 at 1:41
FWIW: The 6502 only had single-bit shifts and rotates, which are even easier to implement in hardware. (If you wanted to shift by multiple bits, you'd simply repeat the operation.)
– duskwuff
Feb 3 at 21:20
add a comment |
Using a three-level multiplexing arrangement will allow the job to be done more with fewer transistors than using a barrel shifter (eight one-of-eight deciders, activated by driving one of eight enable wires).
– supercat
Feb 1 at 1:34
...but will in practice often take more space because of the need to run parallel groups of wires between top-half and bottom-half muxes.
– supercat
Feb 1 at 1:41
FWIW: The 6502 only had single-bit shifts and rotates, which are even easier to implement in hardware. (If you wanted to shift by multiple bits, you'd simply repeat the operation.)
– duskwuff
Feb 3 at 21:20
Using a three-level multiplexing arrangement will allow the job to be done more with fewer transistors than using a barrel shifter (eight one-of-eight deciders, activated by driving one of eight enable wires).
– supercat
Feb 1 at 1:34
Using a three-level multiplexing arrangement will allow the job to be done more with fewer transistors than using a barrel shifter (eight one-of-eight deciders, activated by driving one of eight enable wires).
– supercat
Feb 1 at 1:34
...but will in practice often take more space because of the need to run parallel groups of wires between top-half and bottom-half muxes.
– supercat
Feb 1 at 1:41
...but will in practice often take more space because of the need to run parallel groups of wires between top-half and bottom-half muxes.
– supercat
Feb 1 at 1:41
FWIW: The 6502 only had single-bit shifts and rotates, which are even easier to implement in hardware. (If you wanted to shift by multiple bits, you'd simply repeat the operation.)
– duskwuff
Feb 3 at 21:20
FWIW: The 6502 only had single-bit shifts and rotates, which are even easier to implement in hardware. (If you wanted to shift by multiple bits, you'd simply repeat the operation.)
– duskwuff
Feb 3 at 21:20
add a comment |
In the days before floating point hardware there was fixed-point integer arithmetic! Bit shift instructions were used to implement the "scaling factor" - specifically, to adjust it when you multiplied (or divided) numbers and then had to rescale the number to achieve the desired precision. (Or when adding and subtracting if the operands had different scale factors.)
... and now a small break for a fun anecdote:
My first professional programming job involved working on a Honeywell 716 minicomputer: 16-bit word size. It had hardware floating point, but it was very slow. It had left-shift and right-shift instructions, but the shift amount was an immediate (4-bit) value in the instruction; there was no instruction or addressing mode that took the shift amount from a register. I was doing a bunch of fixed-point arithmetic for signal processing on inputs that had a large dynamic range. There was no single scaling factor I could use for a certain multiply, because whatever factor I picked would either overflow for large inputs or lose all precision for small inputs. So ... I computed the necessary scaling factor from the input values and poked it directly into the shift instruction I was about to execute! (I couldn't use a jump table to the correct shift instruction, or duplicate code blocks: I was also under a tight memory budget; those 716s only had 32K words of memory total.)
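A rough modern sketch of that compute-the-shift-from-the-data idea (not the original 716 assembly; it leans on the GCC/Clang builtin __builtin_clz and assumes a 32-bit int):

    #include <stdint.h>

    /* Pick the largest left shift that won't lose the top set bit,
     * by counting the input's leading zeros within its 16-bit word. */
    static unsigned scale_shift_for(uint16_t magnitude)
    {
        if (magnitude == 0)
            return 15;                        /* nothing set: full headroom */
        return __builtin_clz(magnitude) - 16; /* leading zeros in 16 bits */
    }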
First job, and I had to break all the rules. Fresh out of college, to get that design and code approved I had to convince the top staff engineer, who then had to go to the Navy for their sign-off.
answered Feb 1 at 7:24
– davidbak
3
"In the days before floating point hardware..." and in the days long after floating point hardware. In 2006/2007 I worked in a games company where floating point arithmetic was forbidden for portability reasons.
– Peter Taylor
Feb 1 at 10:47
1
... and in the days with FPGAs. I worked on a project from 1999-2001 to develop a low-cost car engine controller, using a 16-bit processor without floating-point support. Lots of fixed-point arithmetic. From then on, all processors I've used have had floating-point support and I breathed a sigh of relief at never needing fixed-point again. Until 3 years ago when I got involved with FPGA work - and suddenly everything is back in fixed-point processing again!
– Graham
Feb 1 at 14:16
@Graham - but check out Unums/Posits! Maybe some fresh air is coming to FPGA/ASIC work?
– davidbak
Feb 1 at 14:18
The fundamental unit of digital information is the bit. Bytes, words, pages, blocks, sectors, tracks, and what have you are all made up of bits. Just about everything that can be done with information can be built up from operations on bits.
Since at least as far back as the 1940s, numbers in computers have usually been represented as sets of bits. Thus the number 23 might be represented by the bits worth 16, 4, 2, and 1 (binary 10111). This is economical both for representation and for manipulation.
Just about every processor has special operators for sensing and acting on the sign bit. I'm thinking about instructions like "transfer on minus". If you combine this with instructions for rotating or shifting bits in a register, you can build up just about any complex operation imaginable. The more common operations, like adding two numbers, are going to have instructions, even in a RISC processor. The less common operations are going to be done by bit manipulation.
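For example, a processor with no multiply instruction can build one out of nothing but shifts, adds, and bit tests. A minimal sketch in C:

    #include <stdint.h>

    /* Shift-and-add multiplication: examine the multiplier one bit at
     * a time, accumulating a shifted copy of the multiplicand for each
     * set bit -- the schoolbook method, in base 2. */
    static uint16_t mul8x8(uint8_t a, uint8_t b)
    {
        uint16_t product = 0;
        uint16_t shifted = a;
        while (b) {
            if (b & 1)          /* low bit of multiplier set? */
                product += shifted;
            shifted <<= 1;      /* next power of two */
            b >>= 1;
        }
        return product;
    }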
@JeremyP has an excellent list of bit operations in his answer. It's only a partial list. I'll add telephone line management. If you have a bunch of telephone lines, it might be fairly easy to supply a processor with a set of bits, one bit for each line, where a one bit means, "on hook status has changed".
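A sketch of that telephone-line example (all names here are hypothetical, for illustration only):

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical status word: one bit per phone line, set when that
     * line's on-hook status has changed. Shifting walks the lines. */
    static void report_changed_lines(uint16_t status)
    {
        for (int line = 0; line < 16; line++) {
            if (status & 1)
                printf("line %d: hook status changed\n", line);
            status >>= 1;   /* bring the next line into the low bit */
        }
    }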
You can do just about anything, one little bit at a time.
edited Feb 1 at 14:08
answered Feb 1 at 13:22
– Walter Mitty
Setting the sign flag after operations shouldn't actually require shift instructions; it's implemented directly.
– qwr
Feb 1 at 17:34
4
Harvard Mark I (as far as I can tell) had shift instructions. ENIAC had shift instructions. They've been there from day 1 because they're useful (and a single-place shift or rotate is trivial to implement).
– Kelvin Sherlock
Feb 1 at 0:29
2
duplicate: Practical applications of bit shifting, practical applications of bitwise operations, Real world use cases of bitwise operators, Have you ever had to use bit shifting in real projects?
– phuclv
Feb 1 at 2:25