Channel: VMware Communities : Discussion List - All Communities

Unable to upgrade from 7.1 > 7.5 but 7.4 > 7.5 works...anyone else?


I've been having a really tough time upgrading from 7.1 to 7.5 on my external connection server; the internal connection server works fine.  I think the breakdown is in the LDAP sync, because that is usually where it fails, plus I found this in the logs:

 

Property(C): LDAP_UPGRADE_ERROR_MESSAGE1 = LDAP is not ready for an upgrade. Error in reaching the server [VDM_LDAP_ERROR_SERVER_NAME] to get replication status. Please resolve the issue and try again. To get more information, run the command "repadmin /showrepl [VDM_LDAP_ERROR_SERVER_NAME]:389"

 

I did, however, upgrade from 7.1 to 7.4 and then to 7.5, and that worked fine; everything is up and running now.  Has anyone had this issue?  I have logged a support case to see what they come back with.  The documentation says you can go straight from 7.1 to 7.5.
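For reference, the replication check the installer refers to can be run by hand on the connection server before retrying the upgrade. This is only a sketch: it assumes the View LDAP (AD LDS) instance is listening on the default port 389 and that repadmin is available on the server (it ships with the AD DS tools); use whatever server name the error reports in place of VDM_LDAP_ERROR_SERVER_NAME.

repadmin /showrepl localhost:389

If repadmin reports replication errors between the connection servers, the 7.1 > 7.5 upgrade will keep failing at the same point until those are resolved.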


Horizon v7.50 with VCSA v6.7 and ESXi v6.5U2


Hi. We are having some issues provisioning instant clones using Horizon v7.50 with vCenter v6.7, yet our hypervisors are still ESXi v6.5U2.

 

All hosts show this error in the connection server log.

 

2018-06-12T00:23:27.448-04:00 WARN  (1008-1878) <CacheRefreshThread-https://blahblah:443/sdk> [ObjectStore] Host: host-26 not available for provisioning. ConnectionState=connected, PowerState=poweredOn, MaintenanceMode=false, AdminRequestedMaintenance=0, MarkedAsFailed=false, Host Version: 6.5.0, Host API Version: 6.5

 

Creating a new Instant Clone pool fails with:

No full links for internal VMs found

 

I am just not sure what is cause and what is effect. The hosts are healthy, and all have access to the proper datastores, etc. Can anyone point me in a direction? Is there an issue with VCSA v6.7 trying to use the new Instant Clone API while the hypervisors are still v6.5U2? Thanks.
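For anyone comparing, the "Host Version" and "Host API Version" values the connection server logs can be cross-checked from the ESXi shell on any of the hosts with standard commands (nothing environment-specific assumed):

vmware -vl                 # ESXi version and build of the host
esxcli system version get  # the same information via esxcli

If these confirm 6.5, then the provisioning path really is vCenter 6.7 talking to 6.5U2 hosts, which is the combination in question.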

Update Manager with no vMotion and local disk


Hi

 

I have a 6.5 environment with 10 different hosts. All hosts use local storage and have only one VM on them, except for one that also has vCenter on it. If I use Update Manager to restart that host, will vCenter shutting down cause a problem? I'm thinking it may be better to just use the zip file and esxcli commands to update the host, but I'd like to avoid that if possible. It's a unique environment meant for one application.
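For the manual route mentioned above, a minimal sketch of patching from an offline bundle with esxcli (the datastore path and bundle file name are placeholders; the depot path must be the full absolute path):

esxcli system maintenanceMode set --enable true
esxcli software vib update -d /vmfs/volumes/datastore1/ESXi650-update-bundle.zip
reboot

After the host comes back, take it out of maintenance mode with esxcli system maintenanceMode set --enable false. Either way the vCenter VM has to be shut down first, since it lives on the host being rebooted.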

ESX-Host not responding to VC


Hi guys,

 

We had some issues restarting and migrating VMs on one ESX host, so I tried restarting the management agents as described in KB1003490 (VMware Knowledge Base).

 

/etc/init.d/hostd restart     seemed to work fine, but with

 

/etc/init.d/vpxa restart      I got the following output (after a long time):

 

watchdog-vpxa: Terminating watchdog process with PID 35024

vpxa stopped.

 

So I thought vpxa was not running and tried to start it with "/etc/init.d/vpxa start", but ESX says that the service is already running.

 

 

I tried to repeat the procedure, but the vpxa restart hangs with the following output:

 

/etc/init.d/vpxa restart

watchdog-vpxa: PID file /var/run/vmware/watchdog-vpxa.PID does not exist

watchdog-vpxa: Unable to terminate watchdog: No running watchdog process for vpxa

vpxa stopped.

 

 

It would be great if there were a solution without downtime, because there are still VMs running on the host.
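In case it helps anyone looking at the same symptoms, a minimal sketch of checking whether vpxa is really running before touching the init scripts again (assuming SSH/shell access to the host):

ps | grep vpxa            # is a vpxa process actually there?
/etc/init.d/vpxa status   # what the init script thinks
/etc/init.d/hostd status

As a last resort short of a reboot, services.sh restart restarts all management agents; it briefly disconnects the host from vCenter, but the VMs themselves keep running.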

Horizon VM display stuck on Larger?


Hi folks,

 

We had an equipment move last weekend, so our vSphere hosts were powered off and moved.

Everything came back up fine, except this one user now has their VM display set to Larger (pic below) and we don't know why.

I updated the VM's hardware and Tools versions, connected to it myself, and the display is set to Small as expected.

However, when he connects to it, it's set to Larger and grayed out so that he cannot change it.

We've tried toggling Allow Display Scaling from the client, but it's not making a difference.

 

Has anyone heard of this or have a suggestion for how to resolve it?

 

Horizon Servers and Agent 7.3.1

Horizon Client 4.6

 

display.jpg

I can't install VMware ESXi 6.7.0 on my computer - No Network Adapters


Hi, I can't install VMware ESXi 6.7.0 on my computer; I get the error "No Network Adapters".

I have a Dell Aurora R7 with a Killer E2500 Gigabit Ethernet controller (pcnet-fast 79C971, PCI ID 1022:2000).

I can't find a VIB for it.
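Not an answer, but a sketch of how to confirm which PCI ID you actually need a driver for, assuming you can boot the machine from any Linux live USB (the grep pattern is just an example):

lspci -nn | grep -i ethernet

The [vendor:device] pair printed at the end of the line is what a community or partner VIB would have to list among its supported device IDs; if no available VIB maps that ID, the 6.7 installer will keep reporting No Network Adapters.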

 

install Esxi.jpg

Can you help me, please?

vRA Workflow with XAAS to Payload

I have a vRA blueprint with a vSphere VM and an XaaS component.  It simply has two dropdown boxes (the selections made in each box form an array).  I have an event broker subscription that runs (for this blueprint) for POST activation of the VM.  The vRO workflow for this subscription has a PAYLOAD input parameter.  I need to get the two selection box values in PAYLOAD as well.  How is this done?  I have the properties and values for the VM, but those dropdown boxes are not listed anywhere.  Any help is appreciated.

How can I get the versions of the Horizon clients used?


After CVE-2018-6964 I started looking into getting a report of the Horizon Client versions used to connect to my infrastructure. Currently there's no easy way to accomplish that.

On this forum there's a tip that recommends using "Horizon Toolbox", a tool that eases interaction with VMware Horizon; it adds some value because it gathers statistics on the connected client versions, platforms, etc.

But it can't correlate those versions to specific users. Are you aware of anything that can? If there's a new, bigger vulnerability, this will be needed.


vAPI status code 400


I have been struggling to get vAPI to work in vRO 7.1.  I am able to add the metamodel and the endpoint without issue, but whenever I try to run any of the examples I get:

 

 

Workflow execution stack:

***

item: 'List all metamodel services/item2', state: 'failed', business state: 'null', exception: 'HTTP response with status code 400 (enable debug logging for details):  (Workflow:List all metamodel services / Scriptable task (item1)#7)'

workflow: 'List all metamodel services' (9d3c08d7-0d40-4178-944a-6294d4c29e03)

|  'attribute': name=errorCode type=string value=HTTP response with status code 400 (enable debug logging for details):  (Workflow:List all metamodel services / Scriptable task (item1)#7)

|  'input': name=endpoint type=VAPI:VAPIEndpoint value=dunes://service.dunes.ch/CustomSDKObject?id='ENDPOINT--https___XXXXX.XXXX.XXX'&dunesName='VAPI:VAPIEndpoint'

|  'no outputs'

*** End of execution stack.

 

I was not able to see the added vAPI modules in the API explorer either.  I tried rebooting, restarting services, etc.  I then updated the vAPI plug-in to the 7.4 version, and now I do see the added modules in the API explorer; however, I still get the 400 error with any workflow.

Unable to Cleanup Virtual Machine


I'm unable to "Clean Up Virtual Machine" (in General Settings)

 

 

VMware Fusion Version 10.1.2 (8502123)

Mac OS X 10.12.6 (16G1314)

 

 

The error message I get is:

 

 

"Unable to clean up deleted files:

 

 

Read beyond end of object"

 

 

This means there are lots of *.vmdk files left behind after deleting old snapshots; they aren't being used, but they are taking up a lot of space.
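In case a manual workaround is acceptable: Fusion ships a command-line disk tool that can shrink a growable disk once the VM is powered off and has no snapshots. A rough sketch (the VM bundle and disk file names below are only examples):

cd ~/Virtual\ Machines.localized/MyVM.vmwarevm
"/Applications/VMware Fusion.app/Contents/Library/vmware-vdiskmanager" -k "Virtual Disk.vmdk"

This only reclaims space inside the vmdk it is pointed at; orphaned snapshot vmdk files that no longer appear in the .vmx/.vmsd would still need to be identified and removed by hand, carefully and with a backup first.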

 

 

Any help much appreciated.

Linux P2V Failed at 98% Converter 6.1


Hello

 

I'm trying to convert an old physical Linux server to a virtual one.

It keeps failing at 98% with this status:

FAILED: An error occurred during the conversion: 'InitrdNativePatcher failed to generate initrd image: /usr/lib/vmware-converter/initrdGenSuse.sh failed with return code: 1, and message: * /mnt/p2v-src-root/dev has 3 files /sbin/mkinitrd: illegal option -- t * user script returning code 1 * unmounting /mnt/p2v-src-root/dev /mnt/p2v-src-root/proc and /mnt/p2v-src-root/sys ERROR:
failed running mkinitrd vmlinuz-2.6.37.6-24-desktop initrd-2.6.37.6-24-desktop with chroot /mnt/p2v-src-root '
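From the message, Converter's helper is calling the SUSE mkinitrd inside the source chroot with an option ("-t") that this older mkinitrd does not understand. As a rough idea of a manual retry from the helper VM after the failed run, assuming the source root is still mounted at /mnt/p2v-src-root and using the kernel/initrd names from the error above (adjust to what is actually in /boot):

mount --bind /dev  /mnt/p2v-src-root/dev
mount --bind /proc /mnt/p2v-src-root/proc
mount --bind /sys  /mnt/p2v-src-root/sys
chroot /mnt/p2v-src-root /sbin/mkinitrd -k vmlinuz-2.6.37.6-24-desktop -i initrd-2.6.37.6-24-desktop

If that succeeds, the initrd exists on the destination disk and the converted VM may boot even though the Converter task itself reported a failure.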

Fejl p2v.png

 

 

Any suggestions ?

"Session Expired, trying to renew" message when logging on to Identity Manager


After creating a 3-node cluster, users often receive a "Session Expired, trying to renew" message when logging on.  The message usually disappears after a few seconds, but for some users it remains and prevents logon.  Will setting the service.numberOfLoadBalancers option resolve this?  It's listed as optional in the documentation.
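In case someone can confirm: my understanding from the documentation is that this is a per-node runtime property, set on each appliance in the cluster and followed by a service restart, with the value matching the number of load balancers in front of the cluster. A very rough sketch, with the file path and service name assumed rather than verified, so please check the docs for your version:

grep numberOfLoadBalancers /usr/local/horizon/conf/runtime-config.properties    # already set?
echo "service.numberOfLoadBalancers=1" >> /usr/local/horizon/conf/runtime-config.properties
service horizon-workspace restart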

Unable to Connect to MKS: Connection Terminated by server - ESX 5.5 VMs console


Hi,

I am facing a weird issue with an ESXi 5.5 host and its VMs. The VMs were running fine and the ESXi host was reachable, but suddenly I lost connectivity. When I checked from the vSphere Client, all the VMs were in a hung state, and when I tried to open their consoles I got the message below:

"Unable to Connect to MKS: Connection Terminated by server "

After some research on Google, I found that an ESX host reboot will fix it and/or stopping the hp-ams service will fix it. Since I was not able to get into the server remotely, I had to reboot the host manually. After the reboot, the VMs are running fine and everything is normal now, but when I searched for the hp-ams service on the host, I didn't find that service, and I did not even find the hpHelper log file anywhere on the host.

 

Can someone please let me know what else could be the cause if hp-ams is not there, and how I can stop this crash from occurring again?
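For anyone else hitting this, a minimal sketch of checking whether the HP/HPE agentless management pieces are installed on the host at all (the init script name is an assumption and only exists on HPE-customized images):

esxcli software vib list | grep -i ams   # look for an hp-ams / amsd VIB
/etc/init.d/hp-ams.sh status             # only present if the VIB is installed

If nothing shows up, the host isn't running hp-ams at all, and the hung-console/MKS symptom has some other cause (a hostd problem is another common one), so the host logs from before the reboot would be the next place to look.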

ESXi 6.7 kernel panic during Heap_Free in dlmalloc


I'm not sure if this is the right place to post this, but I couldn't find a bug tracker for ESXi.

 

The host has been running ESXi 6.0 and now 6.7 without issue for a couple of years. After a reboot, it's all back to normal, so I don't think it's a hardware problem. I wasn't doing anything in particular when this happened, i.e., no easy repro.

 

Host: VMkernel esxi2 6.7.0 #1 SMP Release build-8169922 Apr  3 2018 14:48:22 x86_64 x86_64 x86_64 ESXi

 

Stack trace:

 

Line prefix: 2018-05-24T00:44:53.953Z, cpu13:2097958)

 

@BlueScreen: PANIC bora/vmkernel/main/dlmalloc.c:4924 - Usage error in dlmalloc

Code start: 0x41801cc00000 VMK uptime: 28:20:55:23.811

0x451a0a79b470:[0x41801cd08ca5]PanicvPanicInt@vmkernel#nover+0x439 stack: 0x7520676e69726f6e

0x451a0a79b510:[0x41801cd08ed8]Panic_NoSave@vmkernel#nover+0x4d stack: 0x451a0a79b570

0x451a0a79b570:[0x41801cd512c2]DLM_free@vmkernel#nover+0x657 stack: 0x431c091f1fb0

0x451a0a79b590:[0x41801cd4e4b0]Heap_Free@vmkernel#nover+0x115 stack: 0x451a0a79b630

0x451a0a79b5e0:[0x41801df056ca]CbtAsyncIODone@(hbr_filter)#<None>+0x2b stack: 0x459a40a26800

0x451a0a79b610:[0x41801ccc89a2]AsyncPopCallbackFrameInt@vmkernel#nover+0x4b stack: 0x451a0a79b670

0x451a0a79b640:[0x41801df072a9]Lwd_IssuePendingIO@(hbr_filter)#<None>+0x9a stack: 0x431c09235c50

0x451a0a79b670:[0x41801defa413]DemandLogReadTokenCallback@(hbr_filter)#<None>+0x1a4 stack: 0x41801cf76149

0x451a0a79b7a0:[0x41801ccc89a2]AsyncPopCallbackFrameInt@vmkernel#nover+0x4b stack: 0x459a40a0d000

0x451a0a79b7d0:[0x41801ccc89a2]AsyncPopCallbackFrameInt@vmkernel#nover+0x4b stack: 0x459a40b16b80

0x451a0a79b800:[0x41801cface08]VSCSI_FSVirtAsyncDone@vmkernel#nover+0x59 stack: 0x17eda3ffe77b66

0x451a0a79b810:[0x41801ccc89a2]AsyncPopCallbackFrameInt@vmkernel#nover+0x4b stack: 0x459a40a55398

0x451a0a79b840:[0x41801cc4d96b]FS_IOAccessDone@vmkernel#nover+0x68 stack: 0x1

0x451a0a79b860:[0x41801ccc89a2]AsyncPopCallbackFrameInt@vmkernel#nover+0x4b stack: 0x459a5d587d00

0x451a0a79b890:[0x41801cc678e8]FDSAsyncTokenIODone@vmkernel#nover+0x91 stack: 0x459a5a861900

0x451a0a79b8c0:[0x41801cf65b2a]SCSIDeviceCmdCompleteInt@vmkernel#nover+0x6f stack: 0x459a410ba240

0x451a0a79b930:[0x41801cf66a17]SCSIDeviceCmdCompleteCB@vmkernel#nover+0x2bc stack: 0x41801d8be5c4

0x451a0a79ba10:[0x41801cf68a2e]SCSICompleteDeviceCommand@vmkernel#nover+0xa7 stack: 0x0

0x451a0a79bb10:[0x41801d883fa0]nmp_CompleteCommandForDevice@com.vmware.vmkapi#v2_5_0_0+0x39 stack: 0x459a5b8cb340

0x451a0a79bb70:[0x41801d884468]nmp_CompleteCommandForPath@com.vmware.vmkapi#v2_5_0_0+0x61 stack: 0x418043400d40

0x451a0a79bcc0:[0x41801cf888c4]SCSICompletePathCommand@vmkernel#nover+0x1f5 stack: 0x430469218180

0x451a0a79bd90:[0x41801cf77849]SCSICompleteAdapterCommand@vmkernel#nover+0x13e stack: 0x900000200

0x451a0a79be20:[0x41801d5f1978]SCSILinuxWorldletFn@com.vmware.driverAPI#9.2+0x3f1 stack: 0x430874ce72e0

0x451a0a79bf80:[0x41801cd3e694]WorldletFunc@vmkernel#nover+0xf5 stack: 0x0

0x451a0a79bfe0:[0x41801cf081f2]CpuSched_StartWorld@vmkernel#nover+0x77 stack: 0x0

base fs=0x0 gs=0x418043400000 Kgs=0x0
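Since the trace goes through the hbr_filter and CBT completion path, this looks related to replication/changed block tracking I/O rather than anything I was doing interactively. If a dump would help, a sketch of where I'd pull it from (standard esxcli, nothing host-specific assumed):

esxcli system coredump partition get   # which partition, if any, holds the dump
esxcli system coredump file list       # file-based core dumps configured on the host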

Let me know if you need more info.

ESXi 6.7 Keep Getting PANIC bora/vmkernel/main/dlmalloc.c: 4924 - Usage error in dlmalloc


Hi,

 

After upgrading from ESXi 6.5 to 6.7, I keep getting this error from my host with a purple screen.

 

     PANIC bora/vmkernel/main/dlmalloc.c: 4924 - Usage error in dlmalloc

 

I reinstalled a few times and am still getting the same problem.  Has anyone had the same thing happen to their hosts?

 

Eddy


Cannot upgrade from vCenter 6.5 to 6.7


I'm trying to upgrade my systems to 6.7. I downloaded the vCenter 6.7 ISO, ran the installer, chose Upgrade, and got through all the choices. As soon as it starts creating the new VCSA, it fails with "Failed to send http data". I can ping the host and vCenter by FQDN, and I also tried IP addresses. I can't find any hardware incompatibilities in the lists. I can log into the vSphere Client, the vCenter web admin portal, and via SSH. I've seen a couple of posts on the web with the same problem, but no solutions yet.
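"Failed to send http data" during stage 1 generally points at the machine running the installer not being able to push the appliance files to the target ESXi host. As a rough connectivity check from that machine (the hostname below is a placeholder; ports 443 and 902 are the usual suspects, but check the install requirements for the full list):

curl -vk https://esxi-host.example.com/       # can we complete a TLS connection on 443?
curl -v telnet://esxi-host.example.com:902    # does anything answer on 902?

Proxy settings on the installer machine are worth checking too, since the installer's HTTP traffic can be silently routed through a proxy that cannot reach the host.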

Cannot install VCSA 6.7


I've had this going on for over a week now and I have nothing but problems on top of problems with no solution in sight.

Originally I had 2 ESXi hosts - one running 6.0 and one running 6.5U1. My 6.5 VCSA was running on the 6.5 host.

All were joined to my Windows AD and I had no issues. Then I started trying to upgrade both to 6.7.

I began with the VCSA as per the docs, mounting the VCSA 6.7 ISO and running the UI installer.

All pre-checks went through fine. It gets to the point of deploying the VCSA and fails with "Failed to send http data".

 

After troubleshooting this for a few days, I gave up and upgraded the 6.0 host to 6.7 - no issues at all.

I began trying to install the 6.7 VCSA on the new 6.7 host and always got the same results, so I went ahead and upgraded my other host to 6.7 as well.

It also upgraded with no issues. Now I have 2 fully functional 6.7 hosts that I can log into from any system in my network with no issues, but I can NOT get the VCSA installed.

 

I tried deploying the OVA, which works fine, but after logging into the VAMI and going through Install, it always fails during configuration. The error is always "the FQDN is invalid". At this point the config stops and the appliance is useless and must be deleted.

Next I tried running the installer from one of my AD servers, which also runs DNS and is a VM on the same host I'm trying to deploy on. This time the deployment phase started immediately and completed. It went through all the post-deployment scripts and then failed at the configuration phase, telling me to do the configuration from the web interface. I couldn't run that from the server because protected mode is enabled, so I switched back to my desktop, logged in, started the config, and got the exact same failure again.

 

Next I logged into one of my Windows 10 VMs on the host and tried the installer from there. I immediately got the same error as on my desktop: "Failed to send http data".

As one more check I tried from a different workstation in the building and again got the same http data error at phase one.

 

I have no network issues that I can find. DNS works fine; there are entries for my hosts and the vCenter that resolve immediately, without fail, from any system. Both my Windows 2016 servers are running DNS and DHCP. Both can access the hosts fine. All my workstations can access everything fine. I have gone through docs until I've gone blind, and I just don't have any other ideas at this time.

Can anyone please come up with something else I can try?

VSAN 6.7 and Optane Disk: Low Performance


I have a 4-node vSAN cluster on vSphere 6.7.0 build 8169922:

 

Model:  HP DL380 GEN10

RAM/node:  164 GB

1 disk group per host, with 1 Intel Optane P4800X 375 GB for caching plus 7 × 1.8 TB SSDs for capacity.

 

All the health checks passed.

 

Using HCIBench the performance is:

 

100% read, random, 4K: up to 280K IOPS, 1080 MB/s throughput, 2.8 ms latency

70/30% R/W, random, 4K: up to 180K IOPS, 700 MB/s throughput, 4.4 ms latency

100% write, random, 4K: up to 38K IOPS, 150 MB/s throughput, 20 ms latency

 

The Intel Optane documentation says:

 

Random 4 kB R/W IOPS: up to 550K/550K

Random 4 kB 70/30 R/W IOPS: up to 500K

 

So 4 Optane drives × 550K should give around 2,200K IOPS, far more than the 280K IOPS I am seeing.

 

So I must have some problem with my setup.

Is anyone using Optane disks who can share their performance results to compare with mine, or give me some advice on the vSAN configuration with Optane disks?
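For comparison purposes, a sketch of the configuration details that make numbers comparable, collected with standard esxcli vSAN commands on any of the hosts (nothing environment-specific assumed):

esxcli vsan cluster get    # cluster state and membership
esxcli vsan storage list   # per-device role, so the Optane should show as the cache tier of each disk group

Details such as the storage policy HCIBench uses (FTT, stripe width, checksum), dedup/compression, and the number of VMDKs and threads also change 4K random results a lot, so they are worth posting alongside the raw IOPS.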

Moving from 5.5 to 6.7


Hello,

 

We are currently on VMware/vCenter 5.5 and looking at potentially moving to the latest VMware version this summer.  Can somebody steer me toward the right path?  My thought process for migrating to the latest version is below (of course I will test the upgrade in a lab first once I have some information here).

 

Current environment with VMware/vCenter 5.5

- 5 Dell hosts (R820) with 256 GB of memory on each host.

- 2 shared storage arrays (Tegile) with a few iSCSI (VMFS5) and NFS datastores; also a Linux server serving NFS for a few VMs.

- vCenter appliance (VCSA), which resides on one of the VMware hosts.

- around 120 VMs.

 

Process of migration

 

1) Take one VMware host out of the cluster (move all VMs off this host, place it in maintenance mode, and shut it down); see the sketch after this list.

     - delete host from cluster

     - install the latest VMware version on the same hardware of the host taken down (let's call it newHost1)

     - bring newHost1 online as standalone; it won't be part of the cluster at this stage.  Add datastores from the shared storage (shut down some VMs and re-add them to the newly built newHost1)

 

2) Repeat step 1 for all 5 hosts.

 

2a) Turn off HA/DRS after 2 hosts are taken down.

 

3) Delete the old VCSA (vCenter) when down to the last VMware host and build a new VCSA.  Re-add all the newly built hosts to the new vCenter and apply license keys, etc.
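For step 1, a minimal sketch of the per-host checks from the ESXi shell before reinstalling (standard vim-cmd, nothing environment-specific assumed):

vim-cmd vmsvc/getallvms                  # confirm no VMs are still registered on the host
vim-cmd hostsvc/maintenance_mode_enter   # put the host in maintenance mode before shutdown

The same commands exist on both 5.5 and 6.7, so they can also be used after the rebuild to verify the host before re-registering VMs.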

 

Please note we will be reusing all hardware.  We expect to get new shared storage in July 2019.  The Dell servers are optional: we could stay with the current R820s or buy new ones.  So the question is whether it's worth migrating now, or should we wait until we buy new hardware next summer and do a complete new build, running the two VMware environments concurrently and moving the VMs over?  I know version 5.5 support ends in September of this year.

 

Thanks ahead.

TT

Unable to download OSR. Error message is There was an unexpected error (type=Internal Server Error, status=500). could not download file


Unable to download OSR.  Error message is:

 

Whitelabel Error Page

This application has no explicit mapping for /error, so you are seeing this as a fallback.

Wed Jun 06 14:45:02 PDT 2018

There was an unexpected error (type=Internal Server Error, status=500).

could not download file
