
MULSD—Multiply Scalar Double-Precision Floating-Point Value

Opcode/Instruction Op/En 64/32 bit Mode Support CPUID Feature Flag Description

F2 0F 59 /r
MULSD xmm1, xmm2/m64
RM V/V SSE2
Multiply the low double-precision floating-point value in xmm2/m64 by the low double-precision floating-point value in xmm1.

VEX.NDS.128.F2.0F.WIG 59 /r
VMULSD xmm1, xmm2, xmm3/m64
RVM V/V AVX
Multiply the low double-precision floating-point value in xmm3/m64 by the low double-precision floating-point value in xmm2.

EVEX.NDS.LIG.F2.0F.W1 59 /r
VMULSD xmm1 {k1}{z}, xmm2, xmm3/m64 {er}
T1S V/V AVX512F
Multiply the low double-precision floating-point value in xmm3/m64 by the low double-precision floating-point value in xmm2.

Instruction Operand Encoding

Op/En Operand 1 Operand 2 Operand 3 Operand 4
RM ModRM:reg (r, w) ModRM:r/m (r) NA NA
RVM ModRM:reg (w) VEX.vvvv (r) ModRM:r/m (r) NA
T1S ModRM:reg (w) EVEX.vvvv (r) ModRM:r/m (r) NA

Description

Multiplies the low double-precision floating-point value in the second source operand by the low double-precision floating-point value in the first source operand, and stores the double-precision floating-point result in the destination operand. The second source operand can be an XMM register or a 64-bit memory location. The first source operand and the destination operand are XMM registers.

128-bit Legacy SSE version: The first source operand and the destination operand are the same. Bits (MAX_VL-1:64) of the corresponding destination register remain unchanged.

VEX.128 and EVEX encoded version: The quadword at bits 127:64 of the destination operand is copied from the same bits of the first source operand. Bits (MAX_VL-1:128) of the destination register are zeroed.

EVEX encoded version: The low quadword element of the destination operand is updated according to the writemask.

Software should ensure VMULSD is encoded with VEX.L=0. Encoding VMULSD with VEX.L=1 may encounter unpredictable behavior across different processor generations.
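
As an informal illustration (not part of the instruction definition), the following C sketch uses the SSE2 intrinsic _mm_mul_sd with arbitrary sample values to show that only the low element is multiplied while the upper element of the destination is carried over from the first source operand:

#include <stdio.h>
#include <emmintrin.h>   /* SSE2 intrinsics, including _mm_mul_sd (MULSD) */

int main(void)
{
    /* _mm_set_pd lists elements high-to-low: a = {low 1.5, high 8.0}, b = {low 2.0, high 9.0} */
    __m128d a = _mm_set_pd(8.0, 1.5);
    __m128d b = _mm_set_pd(9.0, 2.0);

    /* MULSD semantics: low element = 1.5 * 2.0 = 3.0, upper element copied from a (8.0) */
    __m128d r = _mm_mul_sd(a, b);

    double out[2];
    _mm_storeu_pd(out, r);
    printf("low = %f, high = %f\n", out[0], out[1]);   /* prints low = 3.000000, high = 8.000000 */
    return 0;
}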

Operation


VMULSD (EVEX encoded version)
IF (EVEX.b = 1) AND SRC2 *is a register*
    THEN
        SET_RM(EVEX.RC);
    ELSE
        SET_RM(MXCSR.RM);
FI;
IF k1[0] or *no writemask*
    THEN
        DEST[63:0] ← SRC1[63:0] * SRC2[63:0]
    ELSE
        IF *merging-masking*
            THEN *DEST[63:0] remains unchanged*    ; merging-masking
            ELSE DEST[63:0] ← 0                    ; zeroing-masking
        FI
FI;
DEST[127:64] ← SRC1[127:64]
DEST[MAX_VL-1:128] ← 0

VMULSD (VEX.128 encoded version)
DEST[63:0] ← SRC1[63:0] * SRC2[63:0]
DEST[127:64] ← SRC1[127:64]
DEST[MAX_VL-1:128] ← 0

MULSD (128-bit Legacy SSE version)
DEST[63:0] ← DEST[63:0] * SRC[63:0]
DEST[MAX_VL-1:64] (Unmodified)

Intel C/C++ Compiler Intrinsic Equivalent

VMULSD __m128d _mm_mask_mul_sd(__m128d s, __mmask8 k, __m128d a, __m128d b);
VMULSD __m128d _mm_maskz_mul_sd(__mmask8 k, __m128d a, __m128d b);
VMULSD __m128d _mm_mul_round_sd(__m128d a, __m128d b, int);
VMULSD __m128d _mm_mask_mul_round_sd(__m128d s, __mmask8 k, __m128d a, __m128d b, int);
VMULSD __m128d _mm_maskz_mul_round_sd(__mmask8 k, __m128d a, __m128d b, int);
MULSD __m128d _mm_mul_sd(__m128d a, __m128d b);
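
A usage sketch of the masked and rounding-controlled forms (not from this reference page; the sample values, mask constants, and the -mavx512f build flag are illustrative assumptions), which require AVX512F support:

#include <stdio.h>
#include <immintrin.h>   /* AVX-512 intrinsics; build with AVX512F enabled, e.g. -mavx512f */

int main(void)
{
    __m128d src = _mm_set_pd(7.0, 5.0);   /* merge source for a masked-off low element */
    __m128d a   = _mm_set_pd(8.0, 1.5);
    __m128d b   = _mm_set_pd(9.0, 2.0);

    /* k1[0] = 0: low element is merged from src or zeroed; high element still comes from a */
    __m128d merged = _mm_mask_mul_sd(src, 0x0, a, b);   /* low = 5.0, high = 8.0 */
    __m128d zeroed = _mm_maskz_mul_sd(0x0, a, b);       /* low = 0.0, high = 8.0 */

    /* k1[0] = 1 with static rounding {er}: round toward negative infinity, suppress exceptions */
    __m128d active = _mm_mask_mul_round_sd(src, 0x1, a, b,
                         _MM_FROUND_TO_NEG_INF | _MM_FROUND_NO_EXC);

    double m[2], z[2], r[2];
    _mm_storeu_pd(m, merged);
    _mm_storeu_pd(z, zeroed);
    _mm_storeu_pd(r, active);
    printf("merged low=%f  zeroed low=%f  active low=%f\n", m[0], z[0], r[0]);
    return 0;
}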

SIMD Floating-Point Exceptions

Overflow, Underflow, Invalid, Precision, Denormal

Other Exceptions

Non-EVEX-encoded instruction, see Exceptions Type 3.

EVEX-encoded instruction, see Exceptions Type E3.