Discussion:
How many x86 instructions?
Yousuf Khan
2014-02-20 23:26:55 UTC
I was asked this question recently, and I just realized that I really
don't know the answer to this. I may have known at one time, but I don't
anymore, as things have moved on since I last used to do assembly
programming. How many instructions are there in modern x86 processors?

These days it seems more practical to just list the number of x86
instruction set extensions than to count up just the individual
instructions themselves. But even the number of x86 instruction set
extensions are becoming unmanageable: x86-16, x86-32, x64, x87, MMX,
SSE, 3DNow, VT-X, AMD-V, AVX, AES, etc., etc.

I searched around just looking for a simple count of instructions, and I
couldn't find one.

Yousuf Khan
Gene E. Bloch
2014-02-21 01:19:02 UTC
Post by Yousuf Khan
I was asked this question recently, and I just realized that I really
don't know the answer to this. I may have known at one time, but I
don't anymore, as things have moved on since I last used to do
assembly programming. How many instructions are there in modern x86
processors?
These days it seems more practical to just list the number of x86
instruction set extensions than to count up just the individual
instructions themselves. But even the number of x86 instruction set
extensions are becoming unmanageable: x86-16, x86-32, x64, x87, MMX,
SSE, 3DNow, VT-X, AMD-V, AVX, AES, etc., etc.
I searched around just looking for a simple count of instructions,
and I couldn't find them.
Yousuf Khan
Looking briefly at http://ref.x86asm.net/ and http://www.sandpile.org/
gives me the impression that the *isn't* a simple count of
instructions.

The first was pretty confusing, but it offered a manual for $20
involving a table in XML that might be a useful way to track it down.
Or maybe nit...

The second lists instructions in several subsets and I didn't see a way
to find a combined list.

I'd suggest making a spreadsheet, and using sandpile to fill some cells
with numbers that you can then easily sum :-)

Suddenly I'm glad that I don't code in Intel asm any more ;-)

To be honest, I vaguely recall that it was never easy, even well before
the proliferation of instruction sets[1], to get such a count.

[1] My Intel asm experience was a pretty long time ago!
--
Gene E. Bloch (Stumbling Bloch)
Gene E. Bloch
2014-02-21 01:26:49 UTC
Post by Gene E. Bloch
Looking briefly at http://ref.x86asm.net/ and
http://www.sandpile.org/ gives me the impression that the *isn't* a
^^^
there
Post by Gene E. Bloch
simple count of instructions.
The first was pretty confusing, but it offered a manual for $20
involving a table in XML that might be a useful way to track it down.
Or maybe nit...
Or maybe not.

I am aware that my spell checker is sometimes quite generous in
allowing semantic errors, so I have no excuse for letting those errors
get away from me.

But if they amused you, so much the better :-)
--
Gene E. Bloch (Stumbling Bloch)
Gene E. Bloch
2014-02-21 01:29:25 UTC
Post by Gene E. Bloch
Post by Gene E. Bloch
Looking briefly at http://ref.x86asm.net/ and
http://www.sandpile.org/ gives me the impression that the *isn't* a
^^^
there
Post by Gene E. Bloch
simple count of instructions.
The first was pretty confusing, but it offered a manual for $20
involving a table in XML that might be a useful way to track it
down. Or maybe nit...
Or maybe not.
I am aware that my spell checker is sometimes quite generous in
allowing semantic errors, so I have no excuse for letting those
errors get away from me.
But if they amused you, so much the better :-)
It's still pretty funny.

So much for my skill at ASCII art, or ASCII errata corrections.

"impression that *there* isn't a simple count ..."
--
Gene E. Bloch (Stumbling Bloch)
Gene E. Bloch
2014-02-21 01:35:04 UTC
Post by Gene E. Bloch
Post by Yousuf Khan
I was asked this question recently, and I just realized that I
really don't know the answer to this. I may have known at one time,
but I don't anymore, as things have moved on since I last used to
do assembly programming. How many instructions are there in modern
x86 processors?
These days it seems more practical to just list the number of x86
instruction set extensions than to count up just the individual
instructions themselves. But even the number of x86 instruction set
extensions are becoming unmanageable: x86-16, x86-32, x64, x87,
MMX, SSE, 3DNow, VT-X, AMD-V, AVX, AES, etc., etc.
I searched around just looking for a simple count of instructions,
and I couldn't find them.
Yousuf Khan
Looking briefly at http://ref.x86asm.net/ and
http://www.sandpile.org/ gives me the impression that the *isn't* a
simple count of instructions.
The first was pretty confusing, but it offered a manual for $20
involving a table in XML that might be a useful way to track it down.
Or maybe nit...
The second lists instructions in several subsets and I didn't see a
way to find a combined list.
I'd suggest making a spreadsheet, and using sandpile to fill some
cells with numbers that you can then easily sum :-)
Suddenly I'm glad that I don't code in Intel asm any more ;-)
To be honest, I vaguely recall that it was never easy, even well
before the proliferation of instruction sets[1], to get such a count.
[1] My Intel asm experience was a pretty long time ago!
There are a lot of manuals here[1]:

http://www.intel.com/content/www/us/en/processors/architectures-software-developer-manuals.html

AKA http://tinyurl.com/3lh7em3

They are downloadable PDFs in several configurations...but I bet they
won't have a unified table either :-(

[1] And you've probably already been there...
--
Gene E. Bloch (Stumbling Bloch)
Yousuf Khan
2014-02-21 02:33:16 UTC
Post by Gene E. Bloch
Looking briefly at http://ref.x86asm.net/ and http://www.sandpile.org/
gives me the impression that the *isn't* a simple count of instructions.
Yeah, I looked at some of those sites already, and that was my
impression too, that the instructions aren't easy to count. It doesn't
help that Intel and AMD have their own extensions, either.

But it goes to show why the age of compilers is well and truly upon us,
there's no human way to keep track of these machine language
instructions. Compilers just use a subset, and just repeat those
instructions over and over again.

Yousuf Khan
Gene E. Bloch
2014-02-21 03:46:08 UTC
Post by Yousuf Khan
Post by Gene E. Bloch
Looking briefly at http://ref.x86asm.net/ and
http://www.sandpile.org/
gives me the impression that the *isn't* a simple count of
instructions.
Yeah, I looked at some of those sites already, and that was my
impression too, that the instructions aren't easy to count. It
doesn't help that Intel and AMD have their own extensions, either.
But it goes to show why the age of compilers is well and truly upon
us, there's no human way to keep track of these machine language
instructions. Compilers just use a subset, and just repeat those
instructions over and over again.
Yousuf Khan
Maybe there are too many instructions (seriously).

But on the other hand, if I were writing video drivers (for moving
video), I'd want a specialized compiler that uses one subset of
instructions, and if I were writing heavy math software, I'd need
another subset in another specialized compiler...and so on.

None of the above is where I am these days :-)
--
Gene E. Bloch (Stumbling Bloch)
Gene E. Bloch
2014-02-21 04:02:26 UTC
Post by Yousuf Khan
Post by Gene E. Bloch
Looking briefly at http://ref.x86asm.net/ and
http://www.sandpile.org/
gives me the impression that the *isn't* a simple count of
instructions.
Yeah, I looked at some of those sites already, and that was my
impression too, that the instructions aren't easy to count. It
doesn't help that Intel and AMD have their own extensions, either.
But it goes to show why the age of compilers is well and truly upon
us, there's no human way to keep track of these machine language
instructions. Compilers just use a subset, and just repeat those
instructions over and over again.
Yousuf Khan
Maybe there are too many instructions (seriously).

But on the other hand, if I were writing video drivers (for moving
video), I'd want a specialized compiler that uses one subset of
instructions, and if I were writing heavy math software, I'd need
another subset in another specialized compiler...and so on.

None of the above is where I am these days :-)

<COMMENT>
I am reposting this. I sent it about 15 min ago, and it is now shown as
removed from the server. Perhaps I have offended the Usenet gods.
</COMMENT>
--
Gene E. Bloch (Stumbling Bloch)
Paul
2014-02-21 04:21:33 UTC
Post by Yousuf Khan
Post by Gene E. Bloch
Looking briefly at http://ref.x86asm.net/ and http://www.sandpile.org/
gives me the impression that the *isn't* a simple count of instructions.
Yeah, I looked at some of those sites already, and that was my
impression too, that the instructions aren't easy to count. It doesn't
help that Intel and AMD have their own extensions, either.
But it goes to show why the age of compilers is well and truly upon us,
there's no human way to keep track of these machine language
instructions. Compilers just use a subset, and just repeat those
instructions over and over again.
Yousuf Khan
Actually, even the compiler writers are getting
tired of the expanding instruction set. (I read
a rant on the topic.) Intel can make new instructions
faster than those guys can find a use for them.

At one time, a compiler would issue instructions
from about 30% of the instruction set. It would mean
a compiled program would never emit the other 70% of
them. But a person writing assembler code, would
have access to all of them, at least, as long as
the mnemonic existed in the assembler.

I worked on a couple of 8 bit micros, and at the
time, you could get a fold-out card (about a foot long,
double sided), with all the instructions on it. And
that's what we'd use as a quick reference when picking
instructions. You can't do that now, because the
fold-out card would be a hundred feet long. It
was a sign you were a "real programmer", when the
local rep gave you your fold-out card :-) LOL.

Paul
Gene E. Bloch
2014-02-21 05:23:02 UTC
Post by Paul
Post by Yousuf Khan
Post by Gene E. Bloch
Looking briefly at http://ref.x86asm.net/ and
http://www.sandpile.org/
gives me the impression that the *isn't* a simple count of
instructions.
Yeah, I looked at some of those sites already, and that was my
impression too, that the instructions aren't easy to count. It
doesn't help that Intel and AMD have their own extensions, either.
But it goes to show why the age of compilers is well and truly upon
us, there's no human way to keep track of these machine language
instructions. Compilers just use a subset, and just repeat those
instructions over and over again.
Yousuf Khan
Actually, even the compiler writers are getting
tired of the expanding instruction set. (I read
a rant on the topic.) Intel can make new instructions
faster than those guys can find a use for them.
At one time, a compiler would issue instructions
from about 30% of the instruction set. It would mean
a compiled program would never emit the other 70% of
them. But a person writing assembler code, would
have access to all of them, at least, as long as
the mnemonic existed in the assembler.
And was somehow accessible to the mind of the programmer :-)
Post by Paul
I worked on a couple of 8 bit micros, and at the
time, you could get a fold-out card (about a foot long,
double sided), with all the instructions on it. And
that's what we'd use as a quick reference when picking
instructions. You can't do that now, because the
fold-out card would be a hundred feet long. It
was a sign you were a "real programmer", when the
local rep gave you your fold-out card :-) LOL.
Paul
In my assembly language days, the fold-out card was pretty damn small
:-)

I remember writing some code to move a block of memory in 286 (I think)
days, and later I realized how badly I had set it up. I didn't take
proper advantage of the way the 20-bit addressing worked[1], so I made
the call unusual and the code klutzy. I required the caller to address
memory to the byte level, i.e., all 20 bits, instead of to the
high-order 16 bits. Flexible but silly.

[1] Because I didn't fully understand the usage conventions yet.
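[Editorial aside: the 20-bit addressing Gene mentions is the 8086's segment:offset scheme, where the physical address is the 16-bit segment shifted left four bits plus the 16-bit offset. A minimal sketch, in Python purely for illustration:]

```python
def physical_address(segment, offset):
    """8086 real-mode address: 16-bit segment shifted left 4 bits,
    plus 16-bit offset, wrapped to the 20-bit address bus."""
    return ((segment << 4) + offset) & 0xFFFFF

# The same byte is reachable through many segment:offset pairs, which
# is why a caller could address memory "to the high-order 16 bits"
# (paragraph-aligned segments) instead of spelling out all 20 bits.
alias_a = physical_address(0x1000, 0x0100)
alias_b = physical_address(0x1010, 0x0000)
```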
--
Gene E. Bloch (Stumbling Bloch)
Yousuf Khan
2014-02-21 05:55:02 UTC
Post by Paul
At one time, a compiler would issue instructions
from about 30% of the instruction set. It would mean
a compiled program would never emit the other 70% of
them. But a person writing assembler code, would
have access to all of them, at least, as long as
the mnemonic existed in the assembler.
I think the original idea of the x86's large instruction count was to
make an assembly language as full-featured as a high-level language. x86
even had string-handling instructions!
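[Editorial aside: for readers who never met them, the x86 string instructions (MOVS, STOS, CMPS, SCAS, LODS) bundle a load/store, pointer updates, and an optional REP repeat count into one opcode. A rough Python model of REP MOVSB with the direction flag clear; this sketches the documented semantics, not any real encoding:]

```python
def rep_movsb(mem, si, di, cx):
    """Model of REP MOVSB, direction flag clear: copy CX bytes from
    [SI] to [DI], incrementing both pointers and decrementing CX."""
    while cx:
        mem[di] = mem[si]
        si += 1
        di += 1
        cx -= 1
    return si, di, cx

mem = list(b"hello...........")
si, di, cx = rep_movsb(mem, 0, 5, 5)   # copy "hello" five bytes along
```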

I remember I designed an early version of the CPUID program that ran
under DOS. The whole executable including its *.exe headers was
something like 40 bytes! Got it down to under 20 bytes when I converted
it to *.com (which had no headers)! Most of the space was used to store
strings, like "This processor is a:" followed by generated strings like
386SX or 486DX, etc. :)
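[Editorial aside: a sketch of how small a DOS .COM program can get. The byte values below are the standard 8086/DOS encodings for the int 21h print-string call; the message text itself is made up:]

```python
# A hand-assembled DOS .COM program that prints a '$'-terminated
# string. .COM files have no header and load at offset 0x100.
msg = b"This processor is a: 386SX$"      # hypothetical message text

ORG = 0x100
CODE_LEN = 8                              # the four instructions below
msg_addr = ORG + CODE_LEN                 # string sits right after the code

program = bytes([
    0xB4, 0x09,                           # mov ah, 09h  ; DOS print-string
    0xBA, msg_addr & 0xFF, msg_addr >> 8, # mov dx, msg  (little-endian)
    0xCD, 0x21,                           # int 21h      ; call DOS
    0xC3,                                 # ret          ; back to PSP -> exit
]) + msg

# Whole executable: 8 bytes of code plus the text itself.
```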

You could make some really tiny assembler programs on x86. Of course,
compiled programs ignored most of these useful high-level instructions
and stuck with simple instructions to do everything.

Yousuf Khan
Stanley Daniel de Liver
2014-04-25 09:54:07 UTC
On Fri, 21 Feb 2014 05:55:02 -0000, Yousuf Khan
Post by Yousuf Khan
Post by Paul
At one time, a compiler would issue instructions
from about 30% of the instruction set. It would mean
a compiled program would never emit the other 70% of
them. But a person writing assembler code, would
have access to all of them, at least, as long as
the mnemonic existed in the assembler.
I think the original idea of the x86's large instruction count was to
make an assembly language as full-featured as a high-level language. x86
even had string-handling instructions!
I remember I designed an early version of the CPUID program that ran
under DOS. The whole executable including its *.exe headers was
something like 40 bytes! Got it down to under 20 bytes when I converted
it to *.com (which had no headers)! Most of the space was used to store
strings, like "This processor is a:" followed by generated strings like
386SX or 486DX, etc. :)
You could make some really tiny assembler programs on x86. Of course,
compiled programs ignored most of these useful high-level instructions
and stuck with simple instructions to do everything.
Yousuf Khan
Did you cater for all the early cpus?

;This code assembles under nasm as 105 bytes of machine code, and will
;return the following values in ax:
;
;AX CPU
;0 8088 (NMOS)
;1 8086 (NMOS)
;2 8088 (CMOS)
;3 8086 (CMOS)
;4 NEC V20
;5 NEC V30
;6 80188
;7 80186
;8 286
;0Ah 386 and higher

code segment
assume cs:code,ds:code
.radix 16
org 100

mov ax,1
mov cx,32
shl ax,cl
jnz x186

;pusha
db 60h
stc
jc nec

mov ax,cs
add ax,01000h
mov es,ax
xor si,si
mov di,100h
mov cx,08000h
;rep es movsb
rep es:movsb
or cx,cx
jz cmos
nmos:
mov ax,0
jmp x8_16
cmos:
mov ax,2
jmp x8_16
nec:
mov ax,4
jmp x8_16
x186:
push sp
pop ax
cmp ax,sp
jz x286

mov ax,6
x8_16:
xor bx,bx
mov byte [a1],043h
a1 label byte
nop
or bx,bx
jnz t1
or bx,1
t1:
jmp cpuid_end
x286:
pushf
pop ax
or ah,070h
push ax
popf
pushf
pop ax
and ax,07000h
jnz x386

mov ax,8
jmp cpuid_end
x386:
mov ax,0Ah

cpuid_end:


code ends

end
--
It's a money /life balance.
Yousuf Khan
2014-04-26 00:58:41 UTC
Post by Stanley Daniel de Liver
On Fri, 21 Feb 2014 05:55:02 -0000, Yousuf Khan
Post by Yousuf Khan
I remember I designed an early version of the CPUID program that ran
under DOS. The whole executable including its *.exe headers was
something like 40 bytes! Got it down to under 20 bytes when I
converted it to *.com (which had no headers)! Most of the space was
used to store strings, like "This processor is a:" followed by
generated strings like 386SX or 486DX, etc. :)
You could make some really tiny assembler programs on x86. Of course,
compiled programs ignored most of these useful high-level instructions
and stuck with simple instructions to do everything.
Yousuf Khan
Did you cater for all the early cpus?
;This code assembles under nasm as 105 bytes of machine code, and will
;
;AX CPU
;0 8088 (NMOS)
;1 8086 (NMOS)
;2 8088 (CMOS)
;3 8086 (CMOS)
;4 NEC V20
;5 NEC V30
;6 80188
;7 80186
;8 286
;0Ah 386 and higher
I don't know if I still have my old program anymore, but I do remember
at that time it could distinguish 386SX from DX and 486SX from DX as well.

Yousuf Khan
Stanley Daniel de Liver
2014-04-26 10:29:56 UTC
On Sat, 26 Apr 2014 01:58:41 +0100, Yousuf Khan
Post by Yousuf Khan
Post by Stanley Daniel de Liver
On Fri, 21 Feb 2014 05:55:02 -0000, Yousuf Khan
Post by Yousuf Khan
I remember I designed an early version of the CPUID program that ran
under DOS. The whole executable including its *.exe headers was
something like 40 bytes! Got it down to under 20 bytes when I
converted it to *.com (which had no headers)! Most of the space was
used to store strings, like "This processor is a:" followed by
generated strings like 386SX or 486DX, etc. :)
I doubt the minimalism; a print rtn is 6 bytes, and the text "This
processor is a:" is 20 on its own!
Post by Yousuf Khan
Post by Stanley Daniel de Liver
Post by Yousuf Khan
You could make some really tiny assembler programs on x86. Of course,
compiled programs ignored most of these useful high-level instructions
and stuck with simple instructions to do everything.
Yousuf Khan
Did you cater for all the early cpus?
;This code assembles under nasm as 105 bytes of machine code, and will
;
;AX CPU
;0 8088 (NMOS)
;1 8086 (NMOS)
;2 8088 (CMOS)
;3 8086 (CMOS)
;4 NEC V20
;5 NEC V30
;6 80188
;7 80186
;8 286
;0Ah 386 and higher
(this wasn't my code, I probably had it from clax some years back)
Post by Yousuf Khan
I don't know if I still have my old program anymore, but I do remember
at that time it could distinguish 386SX from DX and 486SX from DX as well.
Yousuf Khan
Here's the routine I boiled it down to:
test_cpu:
; mikes shorter test for processor
mov ax,07000h
push ax
popf
sti
pushf
pop ax
and ah,0C0h ; isolate top 2 bits
shr ah,1 ; avoid negative
cmp ah,020h
; anything greater means 8086 - but 80 =-1!
; anything less means bit 4 off, i.e 286
; equal implies 386
ret

of course when the CPUID instruction was introduced it made the later
chips much easier to identify!
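[Editorial aside: the routine above exploits how different CPUs treat FLAGS bits 12-15 in real mode: the 8086/88 always reads them back as 1, the 286 forces them to 0, and the 386 returns what was pushed. A Python model of that decision; the bit behaviour is the documented one, the simulation itself is only illustrative:]

```python
def flags_readback(cpu, pushed):
    """What POPF-then-PUSHF returns for FLAGS bits 12-15 in real mode."""
    if cpu == "8086":
        return pushed | 0xF000   # 8086/88: bits 12-15 always read as 1
    if cpu == "286":
        return pushed & 0x0FFF   # 286 real mode: bits 12-15 forced to 0
    return pushed                # 386+: bits come back as written

def classify(cpu):
    """Mirror of the posted routine: push 0x7000, read back, test AH."""
    ax = flags_readback(cpu, 0x7000)
    ah = ((ax >> 8) & 0xC0) >> 1   # 'and ah,0C0h' then 'shr ah,1'
    if ah > 0x20:
        return "8086/88"
    if ah < 0x20:
        return "286"
    return "386+"
```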
--
It's a money /life balance.
Robert Redelmeier
2014-02-21 14:23:01 UTC
Post by Yousuf Khan
But it goes to show why the age of compilers is well and
truly upon us, there's no human way to keep track of these
machine language instructions. Compilers just use a subset,
and just repeat those instructions over and over again.
Hate to break it to you, but you are behind the times. Compilers
are passe' -- "modern" systems use interpreters like JIT Java.

How else do you think Android gets Apps to run on the dogs-breakfast
of ARM processors out there? It is [nearly] all interpreted Java.
So much so that Dell can get 'roid Apps to run on its x86 tablet!
(AFAIK, iOS still runs compiled Apps prob'cuz Apple _hatez_ Oracle)


-- Robert
Yousuf Khan
2014-02-21 19:15:42 UTC
Post by Robert Redelmeier
Post by Yousuf Khan
But it goes to show why the age of compilers is well and
truly upon us, there's no human way to keep track of these
machine language instructions. Compilers just use a subset,
and just repeat those instructions over and over again.
Hate to break it to you, but you are behind the times. Compilers
are passe' -- "modern" systems use interpreters like JIT Java.
How else do you think Android gets Apps to run on the dogs-breakfast
of ARM processors out there? It is [nearly] all interpreted Java.
So much so that Dell can get 'roid Apps to run on its x86 tablet!
(AFAIK, iOS still runs compiled Apps prob'cuz Apple _hatez_ Oracle)
Apparently, even Java byte code is compiled before it is run on a
different type of virtual machine than its own Java VM. Can't use Java
directly on Android:

"There is no Java Virtual Machine in the Android platform. Java bytecode
is not executed. Instead Java classes are compiled into a proprietary
bytecode format and run on Dalvik, a specialized virtual machine (VM)
designed specifically for Android. Unlike Java VMs, which are stack
machines, the Dalvik VM is a register-based architecture.

Because the bytecode loaded by the Dalvik virtual machine is not Java
bytecode, and of the specific way Dalvik load classes, it is not
possible to load Java libraries packages as jar files, and even a
specific logic must be used to load Android libraries (specifically the
content of the underlying dex file must be copied in the application
private internal storage area, before being able to be loaded).[2]"

Comparison of Java and Android API - Wikipedia, the free encyclopedia
http://en.wikipedia.org/wiki/Comparison_of_Java_and_Android_API
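[Editorial aside: the stack-machine vs register-machine distinction in the quote shows up clearly on a toy example. Computing a+b needs explicit pushes on a JVM-style stack machine, while a Dalvik-style register machine names its operands directly. A purely illustrative sketch, with made-up instruction names:]

```python
def run_stack(code):
    """JVM-style: operands live on an implicit stack."""
    stack = []
    for op, *args in code:
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack.pop()

def run_registers(code, nregs=8):
    """Dalvik-style: instructions name source/destination registers."""
    r = [0] * nregs
    for op, *args in code:
        if op == "const":              # const dst, value
            r[args[0]] = args[1]
        elif op == "add":              # add dst, src1, src2
            r[args[0]] = r[args[1]] + r[args[2]]
    return r[0]

# a + b, both ways
stack_result = run_stack([("push", 2), ("push", 3), ("add",)])
reg_result = run_registers([("const", 1, 2), ("const", 2, 3),
                            ("add", 0, 1, 2)])
```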
Gene E. Bloch
2014-02-21 19:34:59 UTC
Post by Yousuf Khan
In comp.sys.ibm.pc.hardware.chips Yousuf Khan
Post by Yousuf Khan
But it goes to show why the age of compilers is well and
truly upon us, there's no human way to keep track of these
machine language instructions. Compilers just use a subset,
and just repeat those instructions over and over again.
Hate to break it to you, but you are behind the times. Compilers
are passe' -- "modern" systems use interpreters like JIT Java.
How else do you think Android gets Apps to run on the
dogs-breakfast
of ARM processors out there? It is [nearly] all interpreted Java.
So much so that Dell can get 'roid Apps to run on its x86 tablet!
(AFAIK, iOS still runs compiled Apps prob'cuz Apple _hatez_ Oracle)
Apparently, even Java byte code is compiled before it is run on a
different type of virtual machine than its own Java VM. Can't use
"There is no Java Virtual Machine in the Android platform. Java
bytecode is not executed. Instead Java classes are compiled into a
proprietary bytecode format and run on Dalvik, a specialized virtual
machine (VM) designed specifically for Android. Unlike Java VMs,
which are stack machines, the Dalvik VM is a register-based
architecture.
Because the bytecode loaded by the Dalvik virtual machine is not Java
bytecode, and of the specific way Dalvik load classes, it is not
possible to load Java libraries packages as jar files, and even a
specific logic must be used to load Android libraries (specifically
the content of the underlying dex file must be copied in the
application private internal storage area, before being able to be
loaded).[2]"
Comparison of Java and Android API - Wikipedia, the free encyclopedia
http://en.wikipedia.org/wiki/Comparison_of_Java_and_Android_API
IMO, that doesn't invalidate the point made by Robert Redelmeier; the
Java VM is one example of his point, but to me, the Dalvik VM is just
another (related) example.

BTW, I see lots of EXE files and very few JAR files in my program file
directories: I don't fully agree with Robert Redelmeier at all.

Of course, my opinion also doesn't invalidate his point - or yours :-)

Except in my opinion...
--
Gene E. Bloch (Stumbling Bloch)
charlie
2014-02-23 15:14:09 UTC
Post by Gene E. Bloch
Post by Yousuf Khan
In comp.sys.ibm.pc.hardware.chips Yousuf Khan
Post by Yousuf Khan
But it goes to show why the age of compilers is well and
truly upon us, there's no human way to keep track of these
machine language instructions. Compilers just use a subset,
and just repeat those instructions over and over again.
Hate to break it to you, but you are behind the times. Compilers
are passe' -- "modern" systems use interpreters like JIT Java.
How else do you think Android gets Apps to run on the dogs-breakfast
of ARM processors out there? It is [nearly] all interpreted Java.
So much so that Dell can get 'roid Apps to run on its x86 tablet!
(AFAIK, iOS still runs compiled Apps prob'cuz Apple _hatez_ Oracle)
Apparently, even Java byte code is compiled before it is run on a
different type of virtual machine than its own Java VM. Can't use Java
"There is no Java Virtual Machine in the Android platform. Java
bytecode is not executed. Instead Java classes are compiled into a
proprietary bytecode format and run on Dalvik, a specialized virtual
machine (VM) designed specifically for Android. Unlike Java VMs, which
are stack machines, the Dalvik VM is a register-based architecture.
Because the bytecode loaded by the Dalvik virtual machine is not Java
bytecode, and of the specific way Dalvik load classes, it is not
possible to load Java libraries packages as jar files, and even a
specific logic must be used to load Android libraries (specifically
the content of the underlying dex file must be copied in the
application private internal storage area, before being able to be
loaded).[2]"
Comparison of Java and Android API - Wikipedia, the free encyclopedia
http://en.wikipedia.org/wiki/Comparison_of_Java_and_Android_API
IMO, that doesn't invalidate the point made by Robert Redelmeier; the
Java VM is one example of his point, but to me, the Dalvik VM is just
another (related) example.
BTW, I see lots of EXE files and very few JAR file in my program file
directories: I don't fully agree with Robert Redelmeier at all.
Of course, my opinion also doesn't invalidate his point - or yours :-)
Except in my opinion...
You old timers should love this one!

Back in the late 80's we got into a real time response situation that
was caused by code development using a then popular and "mil certified"
compiler. The resulting code was horrible in terms of speed. It was so
bad that the military decided to fund a project to develop a "code
checker" that analyzed compiler output code for all kinds of issues.
One of the first results was that the compilers of the time did not
begin to utilize the processor's capabilities. Very limited percentages
of available instruction sets were used.

At the time, the only out we had in order to meet contract requirements
was to write a combination of assembly code, compiled code, and horrors,
machine code. If that wasn't bad enough, we then had to "disassemble"
the machine code to see if there was a way to duplicate it at the
highest level possible, without writing compiler extensions.

The whole thing happened because the end product had microprocessors
controlling various parts of a system, and they had to share resources,
common memory, have both a hierarchical and a random interrupt
capability, and be able to execute tasking in specific short time
frames. ECCH!

(When somebody shoots a missile at your rear, there isn't a lot of time
to go about doing something about it)!
J. P. Gilliver (John)
2014-02-23 16:37:28 UTC
In message <5foOu.2965$***@fx21.iad>, charlie <***@msn.com>
writes:
[]
Post by charlie
At the time, the only out we had in order to meet contract requirements
was to write a combination of assembly code, compiled code, and
horrors,
machine code. If that wasn't bad enough, we then had to "disassemble"
the machine code to see if there was a way to duplicate it at the
highest level possible, without writing compiler extensions.
What's machine code (as opposed to assembly code) in this context? How
did you write it?
--
J. P. Gilliver. UMRA: 1960/<1985 MB++G()AL-IS-Ch++(p)***@T+H+Sh0!:`)DNAf

(If you are unlucky you may choose one of the old-fashioned ones [language
schools] and be taught English as it should be, and not as it is, spoken.)
George Mikes, "How to be Decadent" (1977).
charlie
2014-02-23 22:41:51 UTC
Post by J. P. Gilliver (John)
[]
Post by charlie
At the time, the only out we had in order to meet contract
requirements was to write a combination of assembly code, compiled
code, and horrors,
machine code. If that wasn't bad enough, we then had to "disassemble"
the machine code to see if there was a way to duplicate it at the
highest level possible, without writing compiler extensions.
What's machine code (as opposed to assembly code) in this context? How
did you write it?
Assembly code (source) is just that, and is assembled into machine
code at some point. "Dis-assembly" converts machine code back to
assembly code (when the disassembler recognizes the code, which may
not always be the case).
Machine code may be "relocatable", or be tied to memory locations.
Machine code can be the output of the assembler or loader in some cases.

A more complete explanation can be found at
http://en.wikipedia.org/wiki/Machine_code

The front panel on many of the old mainframes and minicomputers allowed
direct entry of machine code, and was usually used to manually enter
such things as a "bootstrap", or loader program.
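[Editorial aside: the assembly/machine-code relationship charlie describes can be shown with a few standard 8086 encodings. An assembler maps each mnemonic to bytes, and a disassembler tries to invert that mapping. The instruction selection is only for illustration; the byte values are the documented 8086 opcodes:]

```python
# Assembly is the human-readable side; machine code is the bytes the
# CPU actually fetches. An assembler is essentially this mapping:
asm_to_machine = {
    "mov ax, 1": bytes([0xB8, 0x01, 0x00]),  # B8 iw: MOV AX, imm16
    "int 21h":   bytes([0xCD, 0x21]),        # CD ib: INT imm8
    "ret":       bytes([0xC3]),              # C3: near return
}

machine_code = b"".join(asm_to_machine.values())

# A disassembler inverts the mapping -- when it recognizes the bytes.
machine_to_asm = {v: k for k, v in asm_to_machine.items()}
```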
BillW50
2014-02-23 23:15:24 UTC
Post by charlie
The front panel on many of the old mainframes and minicomputers allowed
direct entry of machine code, and was usually used to manually enter
such things as a "bootstrap", or loader program.
The way I recall is any computer only understands machine code and
nothing else. Anything else must be converted to machine code at some point.
--
Bill
Gateway M465e ('06 era) - Thunderbird v24.3.0
Centrino Core2 Duo T7400 2.16 GHz - 4GB - Windows 8 Pro w/Media Center
Yousuf Khan
2014-02-24 00:30:17 UTC
Post by BillW50
Post by charlie
The front panel on many of the old mainframes and minicomputers allowed
direct entry of machine code, and was usually used to manually enter
such things as a "bootstrap", or loader program.
The way I recall is any computer only understands machine code and
nothing else. Anything else must be converted to machine code at some point.
I know what Charlie is talking about. When he talks about directly
entering machine code, it means typing in the binary codes directly,
even without the niceness of an assembler to translate it into
partially readable English. This would be entering numbers into memory directly,
like 0x2C, 0x01, 0xFB, etc., etc.

Yousuf Khan
Gene E. Bloch
2014-02-24 20:11:05 UTC
Post by Yousuf Khan
Post by BillW50
Post by charlie
The front panel on many of the old mainframes and minicomputers allowed
direct entry of machine code, and was usually used to manually enter
such things as a "bootstrap", or loader program.
The way I recall is any computer only understands machine code and
nothing else. Anything else must be converted to machine code at some point.
I know what Charlie is talking about. When he talks about directly
entering machine code, it means typing in the binary codes directly,
even without the niceness of an assembler to translate it into
partially readable English. This would be entering numbers into memory
directly, like 0x2C, 0x01, 0xFB, etc., etc.
Yousuf Khan
Not so recently, when I worked on what were then called minicomputers,
the boot process went like this:

Set the front panel data switches to the bits of the first loader
instruction (in machine language, of course)
Set the front panel address switches to the first location of the
loader
Enter the data into memory by pressing the Store button.

Set the data switches to the second instruction and the address
switches to the second address. Press Store.

Repeat a dozen or two times to get the entire bootstrap loader into
memory

Load the main loader paper tape into the paper tape reader

Set the address switches to the starting location of the boot strap
loader

Press the Go button

When the main loader is in, load the paper tape of the program you want
to run into the reader

Set the starting address to the main loader's first address

Press Go

That loader will load the final paper tape automatically, thank Silicon

Over time the process was streamlined a bit, for example by letting the
storage address autoincrement after each Store operation.
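[Editorial aside: Gene's switch-register procedure boils down to very little machine state: an address register, a data register, and Store/Go buttons. A toy model; the bootstrap words below are made up, and the autoincrement is the streamlining he mentions:]

```python
class FrontPanel:
    """Toy model of a minicomputer front panel."""
    def __init__(self, memsize=4096):
        self.mem = [0] * memsize
        self.addr = 0

    def set_address(self, a):      # address switches
        self.addr = a

    def store(self, word):         # data switches + Store button
        self.mem[self.addr] = word
        self.addr += 1             # later panels autoincremented for you

# Toggle in a (hypothetical) bootstrap loader, word by word.
bootstrap = [0o016701, 0o000026, 0o012702, 0o000352]   # made-up words
panel = FrontPanel()
panel.set_address(0o100)
for word in bootstrap:
    panel.store(word)
# From here: set the address back to 0o100, press Go, and let the
# bootstrap pull the main loader in from paper tape.
```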

Maybe you can guess how happy I was when BIOSes started to appear :-)
--
Gene E. Bloch (Stumbling Bloch)
Jason
2014-02-24 23:41:43 UTC
Permalink
On Mon, 24 Feb 2014 12:11:05 -0800 "Gene E. Bloch"
<***@someplace.invalid> wrote in article
<leg90r$nln$1@news.albasani.net>
Post by Gene E. Bloch
Post by Yousuf Khan
Post by BillW50
Post by charlie
The front panel on many of the old mainframes and minicomputers allowed
direct entry of machine code, and was usually used to manually enter
such things as a "bootstrap", or loader program.
The way I recall is any computer only understands machine code and
nothing else. Anything else must be converted to machine at some point.
I know what Charlie is talking about. When he talks about directly
entering machine code, it means typing in the binary codes directly,
even without niceness of an assembler to translate it partially into
English readable. This would be entering numbers into memory
directly, like 0x2C, 0x01, 0xFB, etc., etc.
Yousuf Khan
Not so recently, when I worked on what were then called minicomputers,
Set the front panel data switches to the bits of the first loader
instruction (in machine language, of course)
Set the front panel address switches to the first location of the
loader
Enter the data into memory by pressing the Store button.
Set the data switches to the second instruction and the address
switches to the second address. Press Store.
Repeat a dozen or two times to get the entire bootstrap loader into
memory
Load the main loader paper tape into the paper tape reader
Set the address switches to the starting location of the boot strap
loader
Press the Go button
When to main loader is in, load the paper tape of the program you want
to run into the reader
Set the starting address to the main loader's first address
Press Go
That loader will load the final paper tape automatically, thank Silicon
Over time the process was streamlined a bit, for example by letting the
storage address autoincrement after each Store operation.
Maybe you can guess how happy I was when BIOSes started to appear :-)
lol I'm sure you were! The first computer I used had the boot record on a
single tab card. It used up about 75 of the 80 columns. We whippersnappers
memorized the sequence and could type it in on the console
teletypewriter. It was faster than tracking down the boot card sometimes.
k***@attt.bizz
2014-02-24 00:34:17 UTC
Permalink
Post by BillW50
Post by charlie
The front panel on many of the old mainframes and minicomputers allowed
direct entry of machine code, and was usually used to manually enter
such things as a "bootstrap", or loader program.
The way I recall is any computer only understands machine code and
nothing else. Anything else must be converted to machine at some point.
That's sorta the meaning of the word "machine" in "machine code". ;-)

The issue is how the programs are stored in the meantime. If the
machine code is never "seen" in the wild, it's an interpreter. If the
machine code is stored somewhere, it's either "assembled" or
"compiled". The major difference is that an "assembled" program
has a 1:1 correspondence to its machine code, while a "compiled" program
will not. Of course, a "macro" assembler confuses this point some.
charlie
2014-02-24 09:42:42 UTC
Permalink
Post by k***@attt.bizz
machine code is stored somewhere it's either "assembled" or
"compiled".
There's more! A "loader" can take a binary-type file and place it in
memory. If the loader has a system-level "map" of memory usage, and of
resident code entries and exits, it can load the code at a relative or
absolute memory location and inform the system-level software where it
is. Or it might do a "load and go", so that when the loader is finished,
the processor jumps to and starts executing at an address provided by the
loader. A system might tell the loader where in memory to put the code.
A programmer's nightmare is intermixed code and data, with self-modifying
code added just for giggles! Some compilers/assemblers used
to generate machine code had/have detectable signatures tracing back to
the particular development software that was used. This allowed authors
to check whether they were being properly paid for use of their
development software. (Freeware or student development software, pay for
commercial use.) I'd suggest that you don't use "student" or
"educational" development software to develop a commercial program!
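The relative-vs-absolute loading and "load and go" ideas can be sketched in a few lines; the image format, field names, and numbers below are all hypothetical:

```python
# Toy relocating loader: the image lists its words, which offsets hold
# addresses needing the load base added (relocations), and an entry
# point. "Load and go" means control transfers to that entry point.
def load(image, memory, base):
    for offset, word in enumerate(image["words"]):
        memory[base + offset] = word
    for fixup in image["relocs"]:       # offsets holding base-relative addresses
        memory[base + fixup] += base    # rebase to the actual load location
    return base + image["entry"]        # where the processor should jump

memory = [0] * 64
image = {"words": [10, 3, 99], "relocs": [1], "entry": 0}  # word 1 is an address
start = load(image, memory, base=16)
print(start, memory[16:19])  # → 16 [10, 19, 99]
```

Mixing code and data, as charlie notes, is what makes this hard in practice: the loader cannot tell which words are addresses unless the image says so.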
Post by k***@attt.bizz
Post by k***@attt.bizz
Post by charlie
The front panel on many of the old mainframes and minicomputers allowed
direct entry of machine code, and was usually used to manually enter
such things as a "bootstrap", or loader program.
The way I recall is any computer only understands machine code and
nothing else. Anything else must be converted to machine at some point.
That sorta the meaning of the word "machine" in "machine code". ;-)
The issue is how the programs are stored, in the mean time. If the
machine code is never "seen" in the wild, it's an interpreter. If the
machine code is stored somewhere it's either "assembled" or
"compiled". The major difference being that an "assembled" program
has a 1:1 correspondence to its machine code, a "compiled" program
will not. Of course a "macro" assembler confuses this point some.
Gene E. Bloch
2014-02-23 23:45:55 UTC
Permalink
Post by J. P. Gilliver (John)
[]
Post by charlie
At the time, the only out we had in order to meet contract
requirements was to write a combination of assembly code, compiled
code, and horrors, machine code. If that wasn't bad enough, we then
had to "disassemble" the machine code to see if there was a way to
duplicate it at the highest level possible, without writing compiler
extensions.
What's machine code (as opposed to assembly code) in this context?
How did you write it?
This might help:

When I owned an Apple ][, for a long time I didn't own an assembler
program. I wrote some code in hex...

Let me tell you, "a small change" was a complete oxymoron.

"Machine code" means the actual bits or bytes that go into memory.
"Assembly code" is a *symbolic* language. Assembly language code, for
various reasons, might not even be a perfect 1 to 1 match to what goes
into the machine.
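One concrete reason the match isn't always 1:1: a single mnemonic can have several valid encodings, and the assembler picks one. The byte sequences below are real 16-bit x86 encodings; the dictionary itself is just illustrative:

```python
# Two valid encodings of the same instruction, ADD AX, 1.
# An assembler chooses one; a disassemble/reassemble round trip may
# therefore not reproduce the exact bytes originally in memory.
encodings_of_add_ax_1 = {
    "05 01 00": "ADD AX, imm16   (opcode 05, full 16-bit immediate)",
    "83 C0 01": "ADD r/m16, imm8 (opcode 83 /0, sign-extended immediate)",
}
print(sorted(encodings_of_add_ax_1))
```

Both sequences execute identically, which is exactly why assembly source is symbolic rather than a fixed picture of the bytes.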
--
Gene E. Bloch (Stumbling Bloch)
BillW50
2014-02-23 23:49:34 UTC
Permalink
Post by Gene E. Bloch
Post by J. P. Gilliver (John)
[]
Post by charlie
At the time, the only out we had in order to meet contract
requirements was to write a combination of assembly code, compiled
code, and horrors,
machine code. If that wasn't bad enough, we then had to "disassemble"
the machine code to see if there was a way to duplicate it at the
highest level possible, without writing compiler extensions.
What's machine code (as opposed to assembly code) in this context? How
did you write it?
When I owned an Apple ][, for a long time I din't own an assembler
program. I wrote some code in hex...
Let me tell you, "a small change" was a complete oxymoron.
"Machine code" means the actual bits or bytes that go into memory.
"Assembly code" is a *symbolic* language. Assembly language code, for
various reasons, might not even be a perfect 1 to 1 match to what goes
into the machine.
+1
--
Bill
Gateway M465e ('06 era) - Thunderbird v24.3.0
Centrino Core2 Duo T7400 2.16 GHz - 4GB - Windows 8 Pro w/Media Center
Char Jackson
2014-02-22 01:03:11 UTC
Permalink
Post by Yousuf Khan
Post by Robert Redelmeier
Post by Yousuf Khan
But it goes to show why the age of compilers is well and
truly upon us, there's no human way to keep track of these
machine language instructions. Compilers just use a subset,
and just repeat those instructions over and over again.
Hate to break it to you, but you are behind the times. Compilers
are passe' -- "modern" systems use interpreters like JIT Java.
How else you you think Android gets Apps to run on the dogs-breakfast
of ARM processors out there? It is [nearly] all interpreted Java.
So much so that Dell can get 'roid Apps to run on its x86 tablet!
(AFAIK, iOS still runs compiled Apps prob'cuz Apple _hatez_ Oracle)
Apparently, even Java bytecode is compiled before it is run, on a
different type of virtual machine than the standard Java VM. Android
can't use Java bytecode directly:
"There is no Java Virtual Machine in the Android platform. Java bytecode
is not executed. Instead Java classes are compiled into a proprietary
bytecode format and run on Dalvik, a specialized virtual machine (VM)
designed specifically for Android. Unlike Java VMs, which are stack
machines, the Dalvik VM is a register-based architecture.
Because the bytecode loaded by the Dalvik virtual machine is not Java
bytecode, and of the specific way Dalvik load classes, it is not
possible to load Java libraries packages as jar files, and even a
specific logic must be used to load Android libraries (specifically the
content of the underlying dex file must be copied in the application
private internal storage area, before being able to be loaded).[2]"
Comparison of Java and Android API - Wikipedia, the free encyclopedia
http://en.wikipedia.org/wiki/Comparison_of_Java_and_Android_API
There has been some buzz in recent months about Dalvik's replacement, ART.
ART is apparently an "ahead of time" compiler, unlike Dalvik, which is "just
in time". ART is supposed to improve app performance and battery life, at
the expense of somewhat larger file sizes.

Sample article
http://lifehacker.com/android-art-vs-dalvik-runtimes-effect-on-battery-life-1507264545
--
Char Jackson
Robert Redelmeier
2014-02-22 02:16:26 UTC
Permalink
Post by Yousuf Khan
Post by Robert Redelmeier
Post by Yousuf Khan
But it goes to show why the age of compilers is well and
truly upon us, there's no human way to keep track of these
machine language instructions. Compilers just use a subset,
and just repeat those instructions over and over again.
Hate to break it to you, but you are behind the times. Compilers
are passe' -- "modern" systems use interpreters like JIT Java.
How else you you think Android gets Apps to run on the dogs-breakfast
of ARM processors out there? It is [nearly] all interpreted Java.
So much so that Dell can get 'roid Apps to run on its x86 tablet!
(AFAIK, iOS still runs compiled Apps prob'cuz Apple _hatez_ Oracle)
Apparently, even Java byte code is compiled before it is run on a
different type of virtual machine than its own Java VM. Can't use Java
"There is no Java Virtual Machine in the Android platform. Java bytecode
is not executed. Instead Java classes are compiled into a proprietary
bytecode format and run on Dalvik, a specialized virtual machine (VM)
designed specifically for Android. Unlike Java VMs, which are stack
machines, the Dalvik VM is a register-based architecture.
Because the bytecode loaded by the Dalvik virtual machine is not Java
bytecode, and of the specific way Dalvik load classes, it is not
possible to load Java libraries packages as jar files, and even a
specific logic must be used to load Android libraries (specifically the
content of the underlying dex file must be copied in the application
private internal storage area, before being able to be loaded).[2]"
Comparison of Java and Android API - Wikipedia, the free encyclopedia
http://en.wikipedia.org/wiki/Comparison_of_Java_and_Android_API
Thank you for the additional details. "Precompiled" makes some sense
-- why waste all that time parsing ASCII? Dalvik would of course
have to be customized for the flavor of ARM it was installed on.

Dalvik being a register-based VM also makes some sense for
ARMs with more registers. x86 has a blazing fast data L1
that reduces the stack penalty, often to zero. I wonder how
Dell implemented Dalvik on the Venue?


-- Robert
Yousuf Khan
2014-02-22 02:32:49 UTC
Permalink
Post by Robert Redelmeier
Thanks you for the additional details. "precompiled" makes some sense
-- why waste all that time parsing ASCII? Dalvik would of course
have to be customized for the flavor of ARM it was installed on.
It's interesting how Java has become just another compiled language in
many cases these days.
Post by Robert Redelmeier
Dalvik being a register-based VM also makes some sense for
ARMs with more registers. x86 has a blazing fast data L1
that reduces the stack penalty, often to zero. I wonder how
Dell implemented Dalvik on the Venue?
x86 also has lots of registers to spare these days (thanks to x64), so a
register-based VM should be pretty blazing fast on one of those too.
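To make the stack-machine vs. register-machine distinction concrete, here is the same statement, a = b + c, in JVM-style and Dalvik-style mnemonics (a rough sketch, not exact bytecode):

```python
# Stack VM (JVM-style): operands are pushed and popped implicitly,
# so one statement fans out into several short instructions.
stack_vm = [
    "iload b",   # push b
    "iload c",   # push c
    "iadd",      # pop two values, push their sum
    "istore a",  # pop the result into a
]
# Register VM (Dalvik-style): one instruction names all three registers.
register_vm = [
    "add-int vA, vB, vC",
]
print(len(stack_vm), len(register_vm))  # → 4 1
```

Fewer, wider instructions suit a CPU with registers to spare, which is the point being made about ARM and x64 above.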

Yousuf Khan
Jason
2014-02-24 04:21:52 UTC
Permalink
On Fri, 21 Feb 2014 14:23:01 +0000 (UTC) "Robert Redelmeier"
Post by Robert Redelmeier
Hate to break it to you, but you are behind the times. Compilers
are passe' -- "modern" systems use interpreters like JIT Java.
How else you you think Android gets Apps to run on the dogs-breakfast
of ARM processors out there? It is [nearly] all interpreted Java.
So much so that Dell can get 'roid Apps to run on its x86 tablet!
(AFAIK, iOS still runs compiled Apps prob'cuz Apple _hatez_ Oracle)
-- Robert
Compilers are NOT passe'

The performance penalty for interpreted languages is a large factor. It's
fine in many situations - scripting languages and the like - and the
modern processors are fast enough to make the performance hit tolerable.
Large-scale applications are still compiled and heavily optimized. Time
is money.
k***@attt.bizz
2014-02-24 18:02:02 UTC
Permalink
Post by Jason
On Fri, 21 Feb 2014 14:23:01 +0000 (UTC) "Robert Redelmeier"
Post by Robert Redelmeier
Hate to break it to you, but you are behind the times. Compilers
are passe' -- "modern" systems use interpreters like JIT Java.
How else you you think Android gets Apps to run on the dogs-breakfast
of ARM processors out there? It is [nearly] all interpreted Java.
So much so that Dell can get 'roid Apps to run on its x86 tablet!
(AFAIK, iOS still runs compiled Apps prob'cuz Apple _hatez_ Oracle)
-- Robert
Compilers are NOT passe'
The performance penalty for interpreted languages is a large factor. It's
fine in many situations - scripting languages and the like - and the
modern processors are fast enough to make the performance hit tolerable.
Large-scale applications are still compiled and heavily optimized. Time
is money.
Time may be money but transistors are free. ;-)
Jason
2014-02-24 18:38:40 UTC
Permalink
Post by k***@attt.bizz
Post by Jason
On Fri, 21 Feb 2014 14:23:01 +0000 (UTC) "Robert Redelmeier"
Post by Robert Redelmeier
Hate to break it to you, but you are behind the times. Compilers
are passe' -- "modern" systems use interpreters like JIT Java.
How else you you think Android gets Apps to run on the dogs-breakfast
of ARM processors out there? It is [nearly] all interpreted Java.
So much so that Dell can get 'roid Apps to run on its x86 tablet!
(AFAIK, iOS still runs compiled Apps prob'cuz Apple _hatez_ Oracle)
-- Robert
Compilers are NOT passe'
The performance penalty for interpreted languages is a large factor. It's
fine in many situations - scripting languages and the like - and the
modern processors are fast enough to make the performance hit tolerable.
Large-scale applications are still compiled and heavily optimized. Time
is money.
Time may be money but transistors are free. ;-)
Well, not exactly free. Visit a National Lab sometime to get an idea of
the magnitude of the expenditures for "free" transistors. I've been
there. Those people do everything to wring out every droplet of
performance that they can, even on petaflops machines.
k***@attt.bizz
2014-02-24 19:09:02 UTC
Permalink
Post by Jason
Post by k***@attt.bizz
Post by Jason
On Fri, 21 Feb 2014 14:23:01 +0000 (UTC) "Robert Redelmeier"
Post by Robert Redelmeier
Hate to break it to you, but you are behind the times. Compilers
are passe' -- "modern" systems use interpreters like JIT Java.
How else you you think Android gets Apps to run on the dogs-breakfast
of ARM processors out there? It is [nearly] all interpreted Java.
So much so that Dell can get 'roid Apps to run on its x86 tablet!
(AFAIK, iOS still runs compiled Apps prob'cuz Apple _hatez_ Oracle)
-- Robert
Compilers are NOT passe'
The performance penalty for interpreted languages is a large factor. It's
fine in many situations - scripting languages and the like - and the
modern processors are fast enough to make the performance hit tolerable.
Large-scale applications are still compiled and heavily optimized. Time
is money.
Time may be money but transistors are free. ;-)
Well, not exactly free. Visit a National Lab sometime to get an idea of
the magnitude of the expenditures for "free" transistors. I've been
there. Those people do everything to wring out every droplet of
performance that they can, even on petaflops machines.
Now, divide that expenditure by the number manufactured. I worked in
high-end microprocessor design for seven or eight years. Transistors
are indeed treated as free, and getting cheaper every year. If you
look at how programmers write, they think they're free, too. ;-)
Jason
2014-02-24 21:35:43 UTC
Permalink
Post by k***@attt.bizz
Post by Jason
On Fri, 21 Feb 2014 14:23:01 +0000 (UTC) "Robert Redelmeier"
Now, divide that expenditure by the number manufactured. I worked in
high-end microprocessor design for seven or eight years. Transistors
are indeed treated as free, and getting cheaper every year. If you
look at how programmers write, they think they're free, too. ;-)
Ok, transistors are indeed free in that regard. But as we've learned
there are limits to absolute performance that can be had even with an
unlimited transistor budget - hence multi-core machines. Programmers
would be very happy if we had figured out how to continuously
boost uniprocessor performance, but it cannot happen, at least with
silicon. Taking advantage of parallel processors is, for most tasks, very
hard.
Robert Redelmeier
2014-02-25 00:35:47 UTC
Permalink
Post by Jason
On Fri, 21 Feb 2014 14:23:01 +0000 (UTC) "Robert Redelmeier"
Post by Robert Redelmeier
Post by Yousuf Khan
But it goes to show why the age of compilers is well and
truly upon us, there's no human way to keep track of these
machine language instructions. Compilers just use a subset,
and just repeat those instructions over and over again.
Hate to break it to you, but you are behind the times. Compilers
are passe' -- "modern" systems use interpreters like JIT Java.
How else you you think Android gets Apps to run on the dogs-breakfast
of ARM processors out there? It is [nearly] all interpreted Java.
So much so that Dell can get 'roid Apps to run on its x86 tablet!
(AFAIK, iOS still runs compiled Apps prob'cuz Apple _hatez_ Oracle)
Compilers are NOT passe'
I feel quoted-out-of-context. I was replying to Mr Khan (restored above)
that compiled languages were in turn being supplanted by interpreted.
Post by Jason
The performance penalty for interpreted languages is a large
factor. It's fine in many situations - scripting languages and
the like - and the modern processors are fast enough to make the
performance hit tolerable. Large-scale applications are still
compiled and heavily optimized. Time is money.
I am well aware of the performance penalty of interpreted languages
(I once programmed in APL/360) and that compiling has been
preferable for HPC. However, the differences between compilers
are reducing to the quality of their libraries, especially SIMD and
multi-threading. The flexibility of interpreters might have value.


-- Robert
John Doe
2014-04-25 03:33:04 UTC
Permalink
Post by Robert Redelmeier
Post by Jason
Post by Robert Redelmeier
Post by Yousuf Khan
But it goes to show why the age of compilers is well and
truly upon us, there's no human way to keep track of these
machine language instructions. Compilers just use a subset,
and just repeat those instructions over and over again.
Hate to break it to you, but you are behind the times.
Compilers are passe' -- "modern" systems use interpreters like
JIT Java.
How else you you think Android gets Apps to run on the
dogs-breakfast of ARM processors out there? It is [nearly]
all interpreted Java. So much so that Dell can get 'roid Apps
to run on its x86 tablet! (AFAIK, iOS still runs compiled Apps
prob'cuz Apple _hatez_ Oracle)
Compilers are NOT passe'
I feel quoted-out-of-context. I was replying to Mr Khan
(restored above) that compiled languages were in turn being
supplanted by interpreted.
Post by Jason
The performance penalty for interpreted languages is a large
factor. It's fine in many situations - scripting languages and
the like - and the modern processors are fast enough to make
the performance hit tolerable. Large-scale applications are
still compiled and heavily optimized. Time is money.
I am well aware of the perfomance penalty of interpreted
languages (I once programmed in APL/360) and that compiling has
been preferable for HPC. However, the differences between
compilers are reducing to the quality of their libraries,
especially SIMD and multi-threading. The flexibility of
interpreters might have value.
Not talking about commercial stuff, but...

I use speech and VC++. Speech activated scripting involves (what I
think is) an interpreted scripting language (Vocola) hooked into
NaturallySpeaking (DNS) speech recognition. Additionally, I'm
using a Windows system hook written in C++ that is compiled. The
systemwide hook is for a few numeric keypad key activated short
SendInput() scripts. The much more involved voice-activated
scripting is for a large number of longer scripts. It's a great
combination for making Windows dance. I would say it's cumbersome,
but I have the editors working efficiently here. Currently using
that to play Age of Empires 2 HD. Speech is on the one extreme. I
suppose assembly language would be on the other, but C++ is at
least compiled.

That has nothing to do with any mass of programmers, but it's
useful here and is a very wide range mess of programming for one
task.
Jim
2014-02-27 06:28:00 UTC
Permalink
[linked image]
About 700 as of '10. AVX/AVX2/BMI1/BMI2/XOP/FMA3/FMA4/Post-32nm Processor
Instruction Extensions (RDRAND and F16C) should put that over 800.