Hello Everyone,
I'm trying to understand the partition tables created by ESXi 5.1. I have highlighted my questions in bold; hopefully that helps, since I'm providing a lot of information that I believe is relevant. I am deploying ESXi 5.1 via Auto Deploy with Stateless Caching. On those hosts, partedUtil shows 7 partitions. Example from one host:
gpt
17844 255 63 286677120
1 64 8191 C12A7328F81F11D2BA4B00A0C93EC93B systemPartition 128
5 8224 520191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
6 520224 1032191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
7 1032224 1257471 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0
8 1257504 1843199 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
2 1843200 10229759 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
3 10229760 286677086 AA31E02A400F11DB9590000C2911D1B8 vmfs 0
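For reference, that table came from partedUtil's getptbl subcommand run against the local disk, along these lines (the naa identifier below is just a placeholder, not the actual device):
partedUtil getptbl /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx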
Based on VMware KB: Using the partedUtil command line utility on ESXi and ESX, and the output from partedUtil, partition 1 (~4 MB) is an EFI system partition, partition 7 (~110 MB) is the diagnostic (core dump) partition, and partition 3 is the local VMFS datastore.
Then I see partitions #2 (4 GB), 5 (250 MB), 6 (250 MB), and 8 (285 MB) that are labeled "linuxNative", with the KB mapping that GUID to "Basic Data." I assume one of those contains the stateless cached ESXi image.
Running "df -h" shows me 4 vfat volumes that line up with the 4 linuxNative partitions by size.
vfat 4.0G 0.0B 4.0G 0% /vmfs/volumes/51c8571d-27b60cd8-30b6-0017a4772014
vfat 249.7M 152.0M 97.7M 61% /vmfs/volumes/b4ccb172-76d5921a-601d-0114951ed141
vfat 249.7M 4.0K 249.7M 0% /vmfs/volumes/99d7c172-cdcd3c5f-27f7-f296e44dbcc3
vfat 285.8M 96.0K 285.7M 0% /vmfs/volumes/51c85701-478b8a90-a879-0017a4772014
Looking at the disk usage of each, I can surmise that /vmfs/volumes/b4ccb172-76d5921a-601d-0114951ed141 (which has 152 MB written) is the ESXi image (for stateless caching). So what then are the other 3 partitions used for?
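One way I was thinking of cross-referencing those volume UUIDs (assuming the usual /bootbank, /altbootbank, and /scratch symlinks exist on a stateless-cached host the way they do on an installed one):
# list mounted filesystems with their UUIDs, types, and sizes
esxcli storage filesystem list
# see which volume UUIDs the standard symlinks resolve to
ls -l /bootbank /altbootbank /scratch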
Now, I also want to increase the vmkDiagnostic partition from 110 MB to 300 MB. Referring to VMware KB: ESXi hosts with larger workloads may generate partial core dumps, I modified the ESXi advanced setting as follows via the host profile:
VMkernel.Boot.diskDumpSlotSize = 300
Rebooted the host. After the reboot, the host is in compliance with the profile and I can see this advanced setting is correctly set to 300 MB, but the partition table for the local disk looks the same (110 MB). I was expecting it to modify the diagnostic partition automatically to 300 MB. Am I missing a step for getting the diagnostic partition increased to 300 MB? Or perhaps, because partition #8 starts essentially at the next sector after #7 ends, there is no room to grow the existing partition in place?
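In case it's relevant, this is roughly how I've been checking which diagnostic partition is active after the reboot:
# list partitions that can be used as a diagnostic partition, and which one is active
esxcli system coredump partition list
# show the currently configured and active diagnostic partition
esxcli system coredump partition get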
I have not yet tried destroying the entire disk and rebooting the host, which I would expect to force it to rebuild the partition table based on the stateless caching setting in the host profile plus the customized core dump slot size, so I may try that tomorrow.
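If I do go that route, my rough (untested) plan is something along these lines; the device path is just a placeholder and this is obviously destructive:
# WARNING: wipes all partitions on the disk, including the local VMFS datastore
# write an empty msdos label over the existing GPT, then reboot so the host rebuilds it
partedUtil setptbl /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx msdos
reboot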
Thanks!