Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
---|---|---|---|---
0F C6 /r ib SHUFPS xmm1, xmm2/m128, imm8 | RMI | V/V | SSE | Select from quadruplet of single-precision floating-point values in xmm1 and xmm2/m128 using imm8, interleaved result pairs are stored in xmm1.
VEX.NDS.128.0F.WIG C6 /r ib VSHUFPS xmm1, xmm2, xmm3/m128, imm8 | RVMI | V/V | AVX | Select from quadruplet of single-precision floating-point values in xmm2 and xmm3/m128 using imm8, interleaved result pairs are stored in xmm1.
VEX.NDS.256.0F.WIG C6 /r ib VSHUFPS ymm1, ymm2, ymm3/m256, imm8 | RVMI | V/V | AVX | Select from quadruplet of single-precision floating-point values in ymm2 and ymm3/m256 using imm8, interleaved result pairs are stored in ymm1.
EVEX.NDS.128.0F.W0 C6 /r ib VSHUFPS xmm1{k1}{z}, xmm2, xmm3/m128/m32bcst, imm8 | FV | V/V | AVX512VL AVX512F | Select from quadruplet of single-precision floating-point values in xmm2 and xmm3/m128 using imm8, interleaved result pairs are stored in xmm1, subject to writemask k1.
EVEX.NDS.256.0F.W0 C6 /r ib VSHUFPS ymm1{k1}{z}, ymm2, ymm3/m256/m32bcst, imm8 | FV | V/V | AVX512VL AVX512F | Select from quadruplet of single-precision floating-point values in ymm2 and ymm3/m256 using imm8, interleaved result pairs are stored in ymm1, subject to writemask k1.
EVEX.NDS.512.0F.W0 C6 /r ib VSHUFPS zmm1{k1}{z}, zmm2, zmm3/m512/m32bcst, imm8 | FV | V/V | AVX512F | Select from quadruplet of single-precision floating-point values in zmm2 and zmm3/m512 using imm8, interleaved result pairs are stored in zmm1, subject to writemask k1.
Op/En | Operand 1 | Operand 2 | Operand 3 | Operand 4
---|---|---|---|---
RMI | ModRM:reg (r, w) | ModRM:r/m (r) | Imm8 | NA |
RVMI | ModRM:reg (w) | VEX.vvvv (r) | ModRM:r/m (r) | Imm8 |
FV | ModRM:reg (w) | EVEX.vvvv (r) | ModRM:r/m (r) | Imm8 |
Description
Selects a single-precision floating-point value of an input quadruplet using a two-bit control and moves it to a designated element of the destination operand. Each 64-bit element-pair of a 128-bit lane of the destination operand is interleaved between the corresponding lane of the first source operand and the second source operand at a granularity of 128 bits. Each two bits in the imm8 byte, starting from bit 0, is the select control of the corresponding element of a 128-bit lane of the destination to receive the shuffled result of an input quadruplet. The two lower elements of a 128-bit lane in the destination receive shuffle results from the quadruplet of the first source operand. The next two elements of the destination receive shuffle results from the quadruplet of the second source operand.
EVEX encoded versions: The first source operand is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 32-bit memory location. The destination operand is a ZMM/YMM/XMM register updated according to the writemask. Imm8[7:0] provides 4 select controls for each applicable 128-bit lane of the destination.
VEX.256 encoded version: The first source operand is a YMM register. The second source operand can be a YMM register or a 256-bit memory location. The destination operand is a YMM register. Imm8[7:0] provides 4 select controls for the high and low 128-bit of the destination.
VEX.128 encoded version: The first source operand is an XMM register. The second source operand can be an XMM register or a 128-bit memory location. The destination operand is an XMM register. The upper bits (MAX_VL-1:128) of the corresponding ZMM register destination are zeroed. Imm8[7:0] provides 4 select controls for each element of the destination.
128-bit Legacy SSE version: The source can be an XMM register or a 128-bit memory location. The destination is not distinct from the first source XMM register and the upper bits (MAX_VL-1:128) of the corresponding ZMM register destination are unmodified. Imm8[7:0] provides 4 select controls for each element of the destination.
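As a concrete illustration (not part of the original manual text), the minimal C sketch below uses the legacy SSE intrinsic listed later on this page; the input values and the chosen control byte (0xE4, built with the _MM_SHUFFLE macro) are arbitrary assumptions made for demonstration.

```c
#include <stdio.h>
#include <xmmintrin.h>

int main(void)
{
    __m128 a = _mm_setr_ps(10.0f, 11.0f, 12.0f, 13.0f);  /* first source (also destination) */
    __m128 b = _mm_setr_ps(20.0f, 21.0f, 22.0f, 23.0f);  /* second source */

    /* imm8 = _MM_SHUFFLE(3, 2, 1, 0) = 0xE4:
     * imm8[1:0] = 0 and imm8[3:2] = 1 pick a[0], a[1] for the two low elements,
     * imm8[5:4] = 2 and imm8[7:6] = 3 pick b[2], b[3] for the two high elements. */
    __m128 r = _mm_shuffle_ps(a, b, _MM_SHUFFLE(3, 2, 1, 0));

    float out[4];
    _mm_storeu_ps(out, r);
    printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);  /* prints: 10 11 22 23 */
    return 0;
}
```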
Operation
Select4(SRC, control) {
CASE (control[1:0]) OF
    0:  TMP ← SRC[31:0];
    1:  TMP ← SRC[63:32];
    2:  TMP ← SRC[95:64];
    3:  TMP ← SRC[127:96];
ESAC;
RETURN TMP
}
VSHUFPS (EVEX encoded versions when SRC2 is a vector register)
(KL, VL) = (4, 128), (8, 256), (16, 512)
TMP_DEST[31:0] ← Select4(SRC1[127:0], imm8[1:0]);
TMP_DEST[63:32] ← Select4(SRC1[127:0], imm8[3:2]);
TMP_DEST[95:64] ← Select4(SRC2[127:0], imm8[5:4]);
TMP_DEST[127:96] ← Select4(SRC2[127:0], imm8[7:6]);
IF VL >= 256
    TMP_DEST[159:128] ← Select4(SRC1[255:128], imm8[1:0]);
    TMP_DEST[191:160] ← Select4(SRC1[255:128], imm8[3:2]);
    TMP_DEST[223:192] ← Select4(SRC2[255:128], imm8[5:4]);
    TMP_DEST[255:224] ← Select4(SRC2[255:128], imm8[7:6]);
FI;
IF VL >= 512
    TMP_DEST[287:256] ← Select4(SRC1[383:256], imm8[1:0]);
    TMP_DEST[319:288] ← Select4(SRC1[383:256], imm8[3:2]);
    TMP_DEST[351:320] ← Select4(SRC2[383:256], imm8[5:4]);
    TMP_DEST[383:352] ← Select4(SRC2[383:256], imm8[7:6]);
    TMP_DEST[415:384] ← Select4(SRC1[511:384], imm8[1:0]);
    TMP_DEST[447:416] ← Select4(SRC1[511:384], imm8[3:2]);
    TMP_DEST[479:448] ← Select4(SRC2[511:384], imm8[5:4]);
    TMP_DEST[511:480] ← Select4(SRC2[511:384], imm8[7:6]);
FI;
FOR j ← 0 TO KL-1
    i ← j * 32
    IF k1[j] OR *no writemask*
        THEN DEST[i+31:i] ← TMP_DEST[i+31:i]
        ELSE
            IF *merging-masking*    ; merging-masking
                THEN *DEST[i+31:i] remains unchanged*
                ELSE *zeroing-masking*    ; zeroing-masking
                    DEST[i+31:i] ← 0
            FI
    FI;
ENDFOR
DEST[MAX_VL-1:VL] ← 0
VSHUFPS (EVEX encoded versions when SRC2 is memory)
(KL, VL) = (4, 128), (8, 256), (16, 512)
FOR j ← 0 TO KL-1
    i ← j * 32
    IF (EVEX.b = 1)
        THEN TMP_SRC2[i+31:i] ← SRC2[31:0]
        ELSE TMP_SRC2[i+31:i] ← SRC2[i+31:i]
    FI;
ENDFOR;
TMP_DEST[31:0] ← Select4(SRC1[127:0], imm8[1:0]);
TMP_DEST[63:32] ← Select4(SRC1[127:0], imm8[3:2]);
TMP_DEST[95:64] ← Select4(TMP_SRC2[127:0], imm8[5:4]);
TMP_DEST[127:96] ← Select4(TMP_SRC2[127:0], imm8[7:6]);
IF VL >= 256
    TMP_DEST[159:128] ← Select4(SRC1[255:128], imm8[1:0]);
    TMP_DEST[191:160] ← Select4(SRC1[255:128], imm8[3:2]);
    TMP_DEST[223:192] ← Select4(TMP_SRC2[255:128], imm8[5:4]);
    TMP_DEST[255:224] ← Select4(TMP_SRC2[255:128], imm8[7:6]);
FI;
IF VL >= 512
    TMP_DEST[287:256] ← Select4(SRC1[383:256], imm8[1:0]);
    TMP_DEST[319:288] ← Select4(SRC1[383:256], imm8[3:2]);
    TMP_DEST[351:320] ← Select4(TMP_SRC2[383:256], imm8[5:4]);
    TMP_DEST[383:352] ← Select4(TMP_SRC2[383:256], imm8[7:6]);
    TMP_DEST[415:384] ← Select4(SRC1[511:384], imm8[1:0]);
    TMP_DEST[447:416] ← Select4(SRC1[511:384], imm8[3:2]);
    TMP_DEST[479:448] ← Select4(TMP_SRC2[511:384], imm8[5:4]);
    TMP_DEST[511:480] ← Select4(TMP_SRC2[511:384], imm8[7:6]);
FI;
FOR j ← 0 TO KL-1
    i ← j * 32
    IF k1[j] OR *no writemask*
        THEN DEST[i+31:i] ← TMP_DEST[i+31:i]
        ELSE
            IF *merging-masking*    ; merging-masking
                THEN *DEST[i+31:i] remains unchanged*
                ELSE *zeroing-masking*    ; zeroing-masking
                    DEST[i+31:i] ← 0
            FI
    FI;
ENDFOR
DEST[MAX_VL-1:VL] ← 0
VSHUFPS (VEX.256 encoded version)
DEST[31:0] ← Select4(SRC1[127:0], imm8[1:0]);
DEST[63:32] ← Select4(SRC1[127:0], imm8[3:2]);
DEST[95:64] ← Select4(SRC2[127:0], imm8[5:4]);
DEST[127:96] ← Select4(SRC2[127:0], imm8[7:6]);
DEST[159:128] ← Select4(SRC1[255:128], imm8[1:0]);
DEST[191:160] ← Select4(SRC1[255:128], imm8[3:2]);
DEST[223:192] ← Select4(SRC2[255:128], imm8[5:4]);
DEST[255:224] ← Select4(SRC2[255:128], imm8[7:6]);
DEST[MAX_VL-1:256] ← 0
VSHUFPS (VEX.128 encoded version)
DEST[31:0] ← Select4(SRC1[127:0], imm8[1:0]);
DEST[63:32] ← Select4(SRC1[127:0], imm8[3:2]);
DEST[95:64] ← Select4(SRC2[127:0], imm8[5:4]);
DEST[127:96] ← Select4(SRC2[127:0], imm8[7:6]);
DEST[MAX_VL-1:128] ← 0
SHUFPS (128-bit Legacy SSE version)
DEST[31:0] ← Select4(SRC1[127:0], imm8[1:0]);
DEST[63:32] ← Select4(SRC1[127:0], imm8[3:2]);
DEST[95:64] ← Select4(SRC2[127:0], imm8[5:4]);
DEST[127:96] ← Select4(SRC2[127:0], imm8[7:6]);
DEST[MAX_VL-1:128] (Unmodified)
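The scalar C sketch below (an illustrative assumption, not the architectural definition) mirrors the Select4 helper and the 128-bit legacy operation above; it can be used to cross-check expected shuffle results.

```c
#include <stdio.h>
#include <stdint.h>

/* Select4: a two-bit control picks one of four packed single-precision values. */
static float select4(const float src[4], uint8_t control)
{
    return src[control & 0x3];
}

/* SHUFPS (128-bit): the two low results come from src1, the two high results
 * from src2. A temporary is used so dest may alias src1, as in the legacy form. */
static void shufps128(float dest[4], const float src1[4],
                      const float src2[4], uint8_t imm8)
{
    float tmp[4];
    tmp[0] = select4(src1, imm8 & 0x3);          /* imm8[1:0] */
    tmp[1] = select4(src1, (imm8 >> 2) & 0x3);   /* imm8[3:2] */
    tmp[2] = select4(src2, (imm8 >> 4) & 0x3);   /* imm8[5:4] */
    tmp[3] = select4(src2, (imm8 >> 6) & 0x3);   /* imm8[7:6] */
    for (int i = 0; i < 4; i++)
        dest[i] = tmp[i];
}

int main(void)
{
    float a[4] = {10, 11, 12, 13}, b[4] = {20, 21, 22, 23}, d[4];
    shufps128(d, a, b, 0xE4);                    /* same control byte as the earlier example */
    printf("%g %g %g %g\n", d[0], d[1], d[2], d[3]);  /* prints: 10 11 22 23 */
    return 0;
}
```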
Intel C/C++ Compiler Intrinsic Equivalent
VSHUFPS __m512 _mm512_shuffle_ps(__m512 a, __m512 b, int imm);
VSHUFPS __m512 _mm512_mask_shuffle_ps(__m512 s, __mmask16 k, __m512 a, __m512 b, int imm);
VSHUFPS __m512 _mm512_maskz_shuffle_ps(__mmask16 k, __m512 a, __m512 b, int imm);
VSHUFPS __m256 _mm256_shuffle_ps (__m256 a, __m256 b, const int select);
VSHUFPS __m256 _mm256_mask_shuffle_ps(__m256 s, __mmask8 k, __m256 a, __m256 b, int imm);
VSHUFPS __m256 _mm256_maskz_shuffle_ps(__mmask8 k, __m256 a, __m256 b, int imm);
SHUFPS __m128 _mm_shuffle_ps (__m128 a, __m128 b, const int select);
VSHUFPS __m128 _mm_mask_shuffle_ps(__m128 s, __mmask8 k, __m128 a, __m128 b, int imm);
VSHUFPS __m128 _mm_maskz_shuffle_ps(__mmask8 k, __m128 a, __m128 b, int imm);
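For the masked EVEX forms, a hedged usage sketch (assuming an AVX-512F target, e.g. compiled with -mavx512f; the mask value and shuffle control are arbitrary illustrative choices) might look like:

```c
#include <immintrin.h>

/* Shuffle a and b, keeping only the even destination elements (mask 0x5555);
 * the odd elements are zeroed by the maskz form, i.e. zeroing-masking. */
__m512 shuffle_keep_even(__m512 a, __m512 b)
{
    return _mm512_maskz_shuffle_ps((__mmask16)0x5555, a, b,
                                   _MM_SHUFFLE(3, 2, 1, 0));
}
```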
SIMD Floating-Point Exceptions
None
Other Exceptions
Non-EVEX-encoded instruction, see Exceptions Type 4.
EVEX-encoded instruction, see Exceptions Type E4NF.