VCVTNE2PS2BF16—Convert Two Packed Single Data to One Packed BF16 Data

Opcode/Instruction Op/En 64/32 bit Mode Support CPUID Feature Flag Description
EVEX.128.F2.0F38.W0 72 /r VCVTNE2PS2BF16 xmm1{k1}{z}, xmm2, xmm3/m128/m32bcst A V/V AVX512VL AVX512_BF16 Convert packed single data from xmm2 and xmm3/m128/m32bcst to packed BF16 data in xmm1 with writemask k1.
EVEX.256.F2.0F38.W0 72 /r VCVTNE2PS2BF16 ymm1{k1}{z}, ymm2, ymm3/m256/m32bcst A V/V AVX512VL AVX512_BF16 Convert packed single data from ymm2 and ymm3/m256/m32bcst to packed BF16 data in ymm1 with writemask k1.
EVEX.512.F2.0F38.W0 72 /r VCVTNE2PS2BF16 zmm1{k1}{z}, zmm2, zmm3/m512/m32bcst A V/V AVX512F AVX512_BF16 Convert packed single data from zmm2 and zmm3/m512/m32bcst to packed BF16 data in zmm1 with writemask k1.

Instruction Operand Encoding

Op/En Tuple Operand 1 Operand 2 Operand 3 Operand 4
A Full ModRM:reg (w) EVEX.vvvv (r) ModRM:r/m (r) N/A

Description

Converts two SIMD registers of packed single-precision data into a single register of packed BF16 data. Elements converted from the second source operand (the register or memory operand) are written to the lower half of the destination, and elements converted from the first source operand are written to the upper half.

This instruction does not support memory fault suppression.

This instruction uses the “Round to nearest (even)” rounding mode. Output denormals are always flushed to zero and input denormals are always treated as zero. MXCSR is neither consulted nor updated. No floating-point exceptions are generated.
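The rounding and denormal behavior described above can be modeled in software. The following is a minimal scalar C sketch, assuming a conventional IEEE 754 FP32 bit layout; it is not the SDM's convert_fp32_to_bfloat16 helper (defined under VCVTNEPS2BF16), only an illustration consistent with the text above.

#include <stdint.h>
#include <string.h>

/* Illustrative model: convert FP32 to BF16 with round-to-nearest-even,
   treating input denormals as zero and ignoring MXCSR entirely. */
static uint16_t fp32_to_bf16(float f)
{
    uint32_t x;
    memcpy(&x, &f, sizeof x);                        /* reinterpret bits */

    if ((x & 0x7F800000u) == 0)                      /* zero or denormal input */
        return (uint16_t)((x & 0x80000000u) >> 16);  /* sign-preserving zero */

    if ((x & 0x7FFFFFFFu) > 0x7F800000u)             /* NaN input */
        return (uint16_t)((x >> 16) | 0x0040u);      /* truncate, force quiet NaN */

    /* Normal or infinite input: round to nearest even on the
       16 low bits that are discarded, then truncate. */
    uint32_t lsb = (x >> 16) & 1u;
    x += 0x7FFFu + lsb;
    return (uint16_t)(x >> 16);
}

Because BF16 and FP32 share the same exponent range, a normal FP32 input cannot round to a BF16 denormal, so zeroing denormal inputs in this sketch also covers the output flush-to-zero rule.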

Operation

VCVTNE2PS2BF16 dest, src1, src2
VL = (128, 256, 512)
KL = VL/16
origdest := dest
FOR i := 0 to KL-1:
    IF k1[ i ] or *no writemask*:
        IF i < KL/2:
            IF src2 is memory and evex.b == 1:
                t := src2.fp32[0]
            ELSE:
                t := src2.fp32[ i ]
        ELSE:
            t := src1.fp32[ i-KL/2 ]
        // See VCVTNEPS2BF16 for definition of the convert helper function
        dest.word[ i ] := convert_fp32_to_bfloat16(t)
    ELSE IF *zeroing*:
        dest.word[ i ] := 0
    ELSE: // Merge masking, dest element unchanged
        dest.word[ i ] := origdest.word[ i ]
DEST[MAXVL-1:VL] := 0
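For comparison with the pseudocode, the sketch below models the 128-bit register form without masking or broadcast, reusing the illustrative fp32_to_bf16() helper from the Description section; element placement follows the Operation above, with src2 filling the lower half of the destination and src1 the upper half.

/* Illustrative model of the 128-bit register form: KL = 8 BF16 words. */
void vcvtne2ps2bf16_128(uint16_t dest[8], const float src1[4], const float src2[4])
{
    for (int i = 0; i < 4; i++)
        dest[i] = fp32_to_bf16(src2[i]);        /* words 0..3 from src2 */
    for (int i = 0; i < 4; i++)
        dest[4 + i] = fp32_to_bf16(src1[i]);    /* words 4..7 from src1 */
}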

Intel C/C++ Compiler Intrinsic Equivalent

VCVTNE2PS2BF16 __m128bh _mm_cvtne2ps_pbh (__m128, __m128);

VCVTNE2PS2BF16 __m128bh _mm_mask_cvtne2ps_pbh (__m128bh, __mmask8, __m128, __m128);

VCVTNE2PS2BF16 __m128bh _mm_maskz_cvtne2ps_pbh (__mmask8, __m128, __m128);

VCVTNE2PS2BF16 __m256bh _mm256_cvtne2ps_pbh (__m256, __m256);

VCVTNE2PS2BF16 __m256bh _mm256_mask_cvtne2ps_pbh (__m256bh, __mmask16, __m256, __m256);

VCVTNE2PS2BF16 __m256bh _mm256_maskz_cvtne2ps_pbh (__mmask16, __m256, __m256);

VCVTNE2PS2BF16 __m512bh _mm512_cvtne2ps_pbh (__m512, __m512);

VCVTNE2PS2BF16 __m512bh _mm512_mask_cvtne2ps_pbh (__m512bh, __mmask32, __m512, __m512);

VCVTNE2PS2BF16 __m512bh _mm512_maskz_cvtne2ps_pbh (__mmask32, __m512, __m512);
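The following is a usage sketch with the 512-bit intrinsic (an assumption about typical use, not part of the SDM text): built with a toolchain that supports AVX512_BF16 (e.g., GCC or Clang with -mavx512bf16), the second argument supplies the lower 16 BF16 words of the result and the first argument the upper 16, matching src2/src1 in the Operation section.

#include <immintrin.h>

/* Convert 32 single-precision values held in two ZMM registers into
   one ZMM register of 32 BF16 values. */
__m512bh convert_two_vectors(__m512 hi_src, __m512 lo_src)
{
    return _mm512_cvtne2ps_pbh(hi_src, lo_src);  /* lo_src -> words 0..15 */
}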

SIMD Floating-Point Exceptions

None.

Other Exceptions

See Table 2-50, “Type E4NF Class Exception Conditions.”