I'm not sure if this is the right place to post this, but I couldn't find a bug tracker for ESXi.
The host ran ESXi 6.0, and now 6.7, without issue for a couple of years. After a reboot everything is back to normal, so I don't think it's a hardware problem. I wasn't doing anything in particular when this happened, i.e., no easy repro.
Host: VMkernel esxi2 6.7.0 #1 SMP Release build-8169922 Apr 3 2018 14:48:22 x86_64 x86_64 x86_64 ESXi
Stack trace:
Line prefix: 2018-05-24T00:44:53.953Z, cpu13:2097958)
@BlueScreen: PANIC bora/vmkernel/main/dlmalloc.c:4924 - Usage error in dlmalloc
Code start: 0x41801cc00000 VMK uptime: 28:20:55:23.811
0x451a0a79b470:[0x41801cd08ca5]PanicvPanicInt@vmkernel#nover+0x439 stack: 0x7520676e69726f6e
0x451a0a79b510:[0x41801cd08ed8]Panic_NoSave@vmkernel#nover+0x4d stack: 0x451a0a79b570
0x451a0a79b570:[0x41801cd512c2]DLM_free@vmkernel#nover+0x657 stack: 0x431c091f1fb0
0x451a0a79b590:[0x41801cd4e4b0]Heap_Free@vmkernel#nover+0x115 stack: 0x451a0a79b630
0x451a0a79b5e0:[0x41801df056ca]CbtAsyncIODone@(hbr_filter)#<None>+0x2b stack: 0x459a40a26800
0x451a0a79b610:[0x41801ccc89a2]AsyncPopCallbackFrameInt@vmkernel#nover+0x4b stack: 0x451a0a79b670
0x451a0a79b640:[0x41801df072a9]Lwd_IssuePendingIO@(hbr_filter)#<None>+0x9a stack: 0x431c09235c50
0x451a0a79b670:[0x41801defa413]DemandLogReadTokenCallback@(hbr_filter)#<None>+0x1a4 stack: 0x41801cf76149
0x451a0a79b7a0:[0x41801ccc89a2]AsyncPopCallbackFrameInt@vmkernel#nover+0x4b stack: 0x459a40a0d000
0x451a0a79b7d0:[0x41801ccc89a2]AsyncPopCallbackFrameInt@vmkernel#nover+0x4b stack: 0x459a40b16b80
0x451a0a79b800:[0x41801cface08]VSCSI_FSVirtAsyncDone@vmkernel#nover+0x59 stack: 0x17eda3ffe77b66
0x451a0a79b810:[0x41801ccc89a2]AsyncPopCallbackFrameInt@vmkernel#nover+0x4b stack: 0x459a40a55398
0x451a0a79b840:[0x41801cc4d96b]FS_IOAccessDone@vmkernel#nover+0x68 stack: 0x1
0x451a0a79b860:[0x41801ccc89a2]AsyncPopCallbackFrameInt@vmkernel#nover+0x4b stack: 0x459a5d587d00
0x451a0a79b890:[0x41801cc678e8]FDSAsyncTokenIODone@vmkernel#nover+0x91 stack: 0x459a5a861900
0x451a0a79b8c0:[0x41801cf65b2a]SCSIDeviceCmdCompleteInt@vmkernel#nover+0x6f stack: 0x459a410ba240
0x451a0a79b930:[0x41801cf66a17]SCSIDeviceCmdCompleteCB@vmkernel#nover+0x2bc stack: 0x41801d8be5c4
0x451a0a79ba10:[0x41801cf68a2e]SCSICompleteDeviceCommand@vmkernel#nover+0xa7 stack: 0x0
0x451a0a79bb10:[0x41801d883fa0]nmp_CompleteCommandForDevice@com.vmware.vmkapi#v2_5_0_0+0x39 stack: 0x459a5b8cb340
0x451a0a79bb70:[0x41801d884468]nmp_CompleteCommandForPath@com.vmware.vmkapi#v2_5_0_0+0x61 stack: 0x418043400d40
0x451a0a79bcc0:[0x41801cf888c4]SCSICompletePathCommand@vmkernel#nover+0x1f5 stack: 0x430469218180
0x451a0a79bd90:[0x41801cf77849]SCSICompleteAdapterCommand@vmkernel#nover+0x13e stack: 0x900000200
0x451a0a79be20:[0x41801d5f1978]SCSILinuxWorldletFn@com.vmware.driverAPI#9.2+0x3f1 stack: 0x430874ce72e0
0x451a0a79bf80:[0x41801cd3e694]WorldletFunc@vmkernel#nover+0xf5 stack: 0x0
0x451a0a79bfe0:[0x41801cf081f2]CpuSched_StartWorld@vmkernel#nover+0x77 stack: 0x0
base fs=0x0 gs=0x418043400000 Kgs=0x0
Lemme know if you need more info.
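In case it helps, these are the commands I'd run on the host to gather more detail (a sketch, assuming stock ESXi 6.7 tooling; hbr_filter is the vSphere Replication I/O filter that shows up in the backtrace):

```shell
# Show the exact hypervisor build (should match the banner above)
vmware -vl

# Confirm where the kernel writes its coredump, so the PSOD dump can be pulled
esxcli system coredump partition get
esxcli system coredump file list

# Check whether the hbr_filter module is loaded, and its version
esxcli system module get -m hbr_filter
vmkload_mod -s hbr_filter

# Generate a full support bundle to attach for VMware support
vm-support
```

These all assume shell access to the host (SSH or DCUI); the support bundle is the most useful artifact if this gets escalated.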