
Module core.simd

Builtin SIMD intrinsics

Functions

Name    Description
loadUnaligned(p) Load unaligned vector from address. This is a compiler intrinsic.
prefetch(address) Emit prefetch instruction.
simd(op1, op2) Generate two operand instruction with XMM 128 bit operands.
simd(op1) Unary SIMD instructions.
simd(d)
simd(f)
simd(op1, op2) For instructions: CMPPD, CMPSS, CMPSD, CMPPS, PSHUFD, PSHUFHW, PSHUFLW, BLENDPD, BLENDPS, DPPD, DPPS, MPSADBW, PBLENDW, ROUNDPD, ROUNDPS, ROUNDSD, ROUNDSS
simd(op1) For instructions with the imm8 version: PSLLD, PSLLQ, PSLLW, PSRAD, PSRAW, PSRLD, PSRLQ, PSRLW, PSRLDQ, PSLLDQ
simd_sto(op1, op2) For "store" operations of the form: op1 op= op2
simd_stod(op1, op2)
simd_stof(op1, op2)
storeUnaligned(p, value) Store vector to unaligned address. This is a compiler intrinsic.
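As a minimal sketch of how the load/store intrinsics fit together (assuming a compiler and target with D_SIMD support; the function name addFour is illustrative, not part of the module):

```d
import core.simd;

version (D_SIMD)
{
    // Add four floats from src into dst element-wise, using the
    // unaligned load/store intrinsics so neither buffer needs to
    // be 16-byte aligned.
    void addFour(const(float)* src, float* dst)
    {
        float4 a = loadUnaligned(cast(const(float4)*) src);
        float4 b = loadUnaligned(cast(const(float4)*) dst);
        storeUnaligned(cast(float4*) dst, a + b);
    }
}
```

Note that `a + b` uses the element-wise operators defined on vector types, so no explicit `simd` call is needed for plain arithmetic.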

Enums

Name    Description
XMM XMM opcodes that conform to the following:

Aliases

Name    Type    Description
byte16 __vector(byte[16])
byte32 __vector(byte[32])
double2 __vector(double[2])
double4 __vector(double[4])
float4 __vector(float[4])
float8 __vector(float[8])
int4 __vector(int[4])
int8 __vector(int[8])
long2 __vector(long[2])
long4 __vector(long[4])
short16 __vector(short[16])
short8 __vector(short[8])
ubyte16 __vector(ubyte[16])
ubyte32 __vector(ubyte[32])
uint4 __vector(uint[4])
uint8 __vector(uint[8])
ulong2 __vector(ulong[2])
ulong4 __vector(ulong[4])
ushort16 __vector(ushort[16])
ushort8 __vector(ushort[8])
Vector __vector(T) Create a vector type.
void16 __vector(void[16])
void32 __vector(void[32])
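Each alias above is shorthand for an instantiation of the Vector template. A minimal sketch of declaring and using one (the function name add4 is illustrative, not part of the module):

```d
import core.simd;

version (D_SIMD)
{
    // int4 is an alias for Vector!(int[4]); element-wise operators
    // are defined directly on vector types, so this compiles to a
    // single packed-integer add on SIMD-capable targets.
    int4 add4(int4 a, int4 b)
    {
        return a + b;
    }
}
```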

Authors

Walter Bright

License

Boost License 1.0.