
VMware server on ReadyNAS 4.2.11

This is an updated and extended how-to, in which I will explain how you can install VMware Server 2 on the Netgear ReadyNAS PRO / 3200 / 4200 platform.

It applies to the following versions:

  • ReadyNAS firmware version Raidiator 4.2.11
  • VMware Server v2.0.2-203138.i386

First of all, let’s prepare:
1. Upgrade your ReadyNAS to the current firmware release using the Frontview control panel -> System -> Update -> Remote -> Reboot.
2. Decide where you want to put your virtual machines. I put them on a share of their own, which I created via Frontview and called “vm”. This makes sense insofar as VMware will create the files for virtual machines with “root” privileges, meaning only the root user will have access to these files via SMB or other access methods unless you reset share privileges via Frontview.
3. Decide where you want to put the ReadyNAS source code and VMware installation packages. I chose the existing backup share.
4. Install the EnableRootSSH addon to gain access to the ReadyNAS via SSH. You can download this add-on here: http://www.readynas.com/download/addons/x86/4.2/EnableRootSSH_1.0-x86.bin and install it via Frontview in System -> Update -> Local. Reboot.
5. You will need an SSH client such as PuTTY.
6. You will need to register an account on vmware.com to download the free VMware Server. Get the i386 package, not the 64-bit package. This guide was compiled against version 2.0.2-203138.i386.

Now let’s get started:
With PuTTY, connect to your ReadyNAS and log in as the “root” user. The password should be the same as your Frontview admin password. You are now at the command prompt of your NAS.

#Note the following line in the login banner:
Linux YOURNASNAME 2.6.33.4.RNx86_64.2.1 #1 SMP Wed May 19 19:36:51 PDT 2010 x86_64 GNU/Linux
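#If the banner has scrolled past, you can print the same information again:
uname -a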
#If you did all the earlier preparation right, your NAS will be running the x86_64 Linux kernel, version 2.6.33.4. Let’s continue with some preparations:

apt-get update && apt-get install build-essential

#This will install the necessary Linux components for compiling your kernel and VMware Server. There will be a prompt asking whether you really want to install these packages, as well as a warning that a few packages could not be authenticated. Answer both with YES.

#Then let’s change into our work directory:
cd /c/backup
#We are now on the backup share. All shares you create via Frontview are created within the directory /c/. Avoid using other Linux paths, as they will probably be on the system partition of the NAS, which is only 5GB. It is not a good idea to have that run out of space.
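#If you want to see how much space is left on the system partition, a quick optional check:
df -h /
#Keep source trees and virtual machines out of the root filesystem.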

#Then we will download the ReadyNAS GPL Package for Raidiator 4.2.11:
wget -q http://www.readynas.com/download/GPL/RNDP6xxx_4.2.11_WW_src.zip
#Let’s unpack it:
unzip -q RNDP6xxx_4.2.11_WW_src.zip -d ./GPL
#Now we will have to compile a new kernel to get the modules we need to run VMware Server.
cd GPL/linux-2.6.33.4
make ARCH=x86_64 oldconfig && make ARCH=x86_64
#This is going to take a while. You can go get another cup of coffee.
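#Optionally, if your NAS has more than one CPU core, GNU make can parallelize the build. I haven’t timed this on the NAS, but it should cut the wait considerably:
#make ARCH=x86_64 oldconfig && make ARCH=x86_64 -j2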

#We need to tell the VMware installer that we compiled a new kernel and its associated modules, as well as where to find them. For this we create a symbolic link:

ln -s /c/backup/GPL/linux-2.6.33.4/ /usr/src/linux
export KERN_DIR=/usr/src/linux

#From kernel 2.6.32 onward, some generated headers moved, which requires us to create two additional symlinks. Without these, VMware setup won’t work later on:

cd /usr/src/linux/include/linux
ln -s ../generated/utsrelease.h .
ln -s ../generated/autoconf.h .
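#A quick sanity check that both links resolve (optional):
ls -l utsrelease.h autoconf.h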

Now on to installing VMware Server. First, copy the downloaded VMware Server archive to the backup share.
Back in the SSH session, we’re going to unpack it, but first let’s change back into the backup share directory:

cd /c/backup
#and let’s unpack it:
gzip -d VMware-server-2.0.2-203138.i386.tar.gz
tar -xvf VMware-server-2.0.2-203138.i386.tar
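#Alternatively, both steps in one go:
#tar -xzf VMware-server-2.0.2-203138.i386.tar.gz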

#Now we need to install some patches for VMware server to make it compatible with our kernel version:
wget http://risesecurity.org/~rcvalle/VMware-server-2.0.2-203138-update-2.patch

The following is taken from Ramon de Carvalho Valle at rise security (http://risesecurity.org/2010/04/02/vmware-server-2-0-2-update-patch-2/). Thanks to him for creating the patch.

#Extract VMware Server modules:
cd vmware-server-distrib/lib/modules/source
tar -xf vmci.tar
tar -xf vmmon.tar
tar -xf vmnet.tar
tar -xf vsock.tar

#Apply the patch:

cd /c/backup/vmware-server-distrib
patch -p1 < ../VMware-server-2.0.2-203138-update-2.patch

#Archive VMware Server modules again:

#Change working directory back to lib/modules/source/:
cd lib/modules/source
rm -f vmci.tar
rm -f vmmon.tar
rm -f vmnet.tar
rm -f vsock.tar
tar -cf vmci.tar vmci-only/
tar -cf vmmon.tar vmmon-only/
tar -cf vmnet.tar vmnet-only/
tar -cf vsock.tar vsock-only/
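#The same re-archiving can also be written as a loop, equivalent to the eight commands above:
#for m in vmci vmmon vmnet vsock; do rm -f $m.tar && tar -cf $m.tar $m-only/; done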

#And now let’s go install the VMware server:

cd /c/backup/vmware-server-distrib
./vmware-install.pl

#You will now get several prompts, all of which you can accept at their default values except for one: you *must* change the default path for virtual machine storage to a path within the /c/ directory. For this I prepared my “vm” share, so I changed the path to “/c/vm/”. This makes sure your virtual machines won’t fill up the system volume of the ReadyNAS.
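#One of the prompts asks for the location of the C header files matching your running kernel.
#If it does, point it at the tree we linked earlier: /usr/src/linux/include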

End notes and limitations:
Thanks to chirpa at the ReadyNAS forum for helping with my questions about the kernel and GPL package and for supplying me with early access to the GPL resources.
Enjoy.

Known limitations:
– The VMware Server web interface seems to have issues in browsers other than Internet Explorer. I recommend you install the VMware Infrastructure Client to access the VMware host and its virtualised guests.
– Sometimes, if you manually restart the VMware services by running “/etc/init.d/vmware restart”, the virtual network service does not restart properly. This requires re-running the configuration script “/usr/bin/vmware-config.pl”.

2 thoughts on “VMware server on ReadyNAS 4.2.11”

  1. Hello,

    I tried to install VMware on a ReadyNAS Pro running 4.2.13.
    Somehow it failed…
    The last entries:
    arch/x86/boot/compressed/vmlinux.bin.lzma: No such file or directory
    AS arch/x86/boot/compressed/piggy.o
    gcc: arch/x86/boot/compressed/piggy.S: No such file or directory
    gcc: no input files
    make[2]: *** [arch/x86/boot/compressed/piggy.o] Error 1
    make[1]: *** [arch/x86/boot/compressed/vmlinux] Error 2
    make: *** [bzImage] Error 2

    When i run the command again it gives:
    scripts/kconfig/conf -o arch/x86/Kconfig
    #
    # configuration written to .config
    #
    scripts/kconfig/conf -s arch/x86/Kconfig
    CHK include/linux/version.h
    CHK include/generated/utsrelease.h
    CALL scripts/checksyscalls.sh
    CHK include/generated/compile.h
    LZMA arch/x86/boot/compressed/vmlinux.bin.lzma
    /bin/sh: lzma: command not found
    MKPIGGY arch/x86/boot/compressed/piggy.S
    arch/x86/boot/compressed/vmlinux.bin.lzma: No such file or directory
    AS arch/x86/boot/compressed/piggy.o
    gcc: arch/x86/boot/compressed/piggy.S: No such file or directory
    gcc: no input files
    make[2]: *** [arch/x86/boot/compressed/piggy.o] Error 1
    make[1]: *** [arch/x86/boot/compressed/vmlinux] Error 2

    Any chance that I can successfully install VMware?

  2. Just install lzma:

    apt-get install lzma
    run the make command again



ThinkPad X1 Carbon (34602SG) – First impressions

It’s been a while since I had any interesting tech that I was actually able to write about. Today, that changed with the arrival of the brand-new Lenovo ThinkPad X1 Carbon. For the international audience, I will write this and maybe a few follow-up articles in English as opposed to German.

[Photo: WP_000096]

Specs wise, it’s pretty standard with the 3rd gen Core i5 3427U processor, 8GB of RAM and 256GB SSD. The Ericsson WWAN card is included, the USB 10/100 Mbit/s Ethernet adapter is not. I’ll have to check if and when I can expect delivery of that. I did not order from Lenovo directly, so I will have to go through my dealer for that. Anyway, enough has been written about the specs and parts in other places.

[Photo: WP_000097 – the system board]

Because I haven’t seen any other shots of a final system board, I included my own. As you can see, the RAM is soldered to the board. Opened up, more than half the system is taken up by its battery. Just to be clear: this was “just for fun”, there were no build issues whatsoever with my model that would have necessitated opening up the system. As far as build quality is concerned, this machine easily beats any of the older ThinkPad models I owned or worked with. For the record: X21, T42p, T60p, T61, T400, T400s, T500, T410, T510, T420, X220, T430. Yes, I know that’s a lot, but I spent quite a while in recent years supporting a fleet of ThinkPads for my last employer.

The X1 Carbon has probably the stiffest base I’ve ever experienced on a laptop. There’s less give than in my 11” MacBook Air (2011), which is impressive considering the X1 has a much larger chassis that could bend. Lenovo did change the color finish on the bezel and palm rest surrounding the keyboard. It is now much closer to the soft-touch finish normally found on the display cover. Oddly enough, I prefer its texture and softness to that of the new all-glass touchpad. Now, the latter is a big improvement over the touchpads previously found on ThinkPads (even the newer ones that started to appear with the T400s). It’s somehow not quite as smooth as you would expect from a glass touchpad, which is something of a problem for me: I have dry skin and noticed it had a bit of a sandpaper effect on my fingertips. For those who absolutely have to use a touchpad: the ones built by Apple are still the cream of the crop. This being a ThinkPad, however, there’s still the good old TrackPoint, and it hasn’t changed a bit.

The LCD screen is of the TN persuasion, and it’s a pretty good one. Colors are vivid and the contrast is excellent to my eyes. Other people have noted the LCD grid. The effect is indeed noticeable if you have really good vision and you’re looking at a mostly white screen (e.g. MS Word). I mostly noticed it because I had read about it and looked for it. In regular use, at what I’d call an ergonomic distance between your eyes and the screen, it’s much harder to see, certainly if, like me, you don’t have perfect eyesight. The resolution is still spot on: 1600 by 900 on a 14” screen is the sweet spot for me. It’s enough to enable some multitasking on the road while keeping the machine portable. For serious work I still recommend a 24” or larger external screen.

I can’t say I spent much time with the stock Windows installation. It’s not as bad as on other PCs I’ve seen (HP, Sony), but it’s probably not worth keeping if you are the least bit technical and know how to install Windows and drivers. It’s a far cry from the Microsoft Signature builds. In any case, I wasn’t going to keep Windows 7 on this machine and proceeded to install Windows 8 Professional RTM:

[Photo: WP_000102]

Here’s a couple of pointers that might help you avoid some of the stumbling blocks I met:

  • If you’re going to install Windows 8 on this machine, put the setup files on a USB stick formatted with FAT32 (UEFI won’t boot the installer off NTFS).
  • Download all the drivers for the X1 Carbon from the Lenovo Beta site here except for Video and WWAN. Install these drivers first!
  • Now download the SCCM driver bundle for Windows 7 here. Also download the Intel Smart Connect drivers here. Unpack and point device manager to these folders to install drivers for all the remaining unrecognized devices.
  • Don’t install the beta Intel HD graphics drivers; use the Update Driver function in Device Manager and have Windows pull new drivers off Windows Update.
  • The Windows built-in driver for the Intel 6205 WLAN card has a wrong default setting: It doesn’t have 802.11n mode enabled. If you don’t enable that in device properties, you will likely only see 54Mbit/s connections. Newer drivers from Intel aren’t available yet but should be out along with drivers for Intel Wireless Display by October 26th. Wireless antenna performance is great though, as I have come to expect from a ThinkPad. Full signal on the 5GHz band where my Mac struggles to keep a connection.
  • I didn’t manage to get the WWAN card to work using the beta driver for Windows 8, the Windows 7 driver however worked perfectly.

Some general early impressions about system performance and such:

  • It’s very quick to boot and shut down running Windows 8. Resume from stand-by is nearly instantaneous.
  • Battery runtime for me seems to be around 5 hours right now with the power profile set to balanced, the display at around half its maximum brightness, WLAN and WWAN enabled. This includes time when the system was still syncing data from my SkyDrive and Exchange mailbox in Outlook, indexing and me installing all the little tools I like to have at the ready. Given that we’re still very early as far as driver support for Windows 8 goes (and that I believe Lenovo’s Power Manager still has some extra tricks that are not yet available), I’m pretty happy with that. Recharging the battery using rapid charge takes care of remaining worries.
  • As a touch typist and die-hard ThinkPad enthusiast, I found the new keyboard easy to get used to. I still miss the 7th-row key placements and keys like “pause”, but it’s something you adjust to pretty quickly. Key feel and responsiveness are nice, and key travel is better than on any other Ultrabook (or MacBook) I’ve tried before. I especially like how the keyboard on the X1 Carbon is part of the bezel. It’s a much cleaner and nicer visual look than the T430’s, which I found distracting.
  • You might want to keep credit cards away from the bottom left corner of the base. That’s where you find the magnet keeping the lid closed.

That’s it for my early thoughts. The X1 Carbon for me is the perfect workhorse computer right now. I don’t need computationally intensive applications on a daily basis (that’s what servers and desktops are for!) and I appreciate the portability. I’ll probably buy a second power supply and I’m seriously considering the USB 3.0 dock.

Otherwise, this computer is what I always thought the MacBook Air should have been: black, no-nonsense, non-glare, non-shiny, all serious, with a great keyboard and a little red dot right smack-dab in the middle where it belongs.

-Jan

Home Assistant Remote Access networking

I’ve written this guide for the documentation pages of the Home Assistant iOS companion app. As I’m rather proud of it, I’m republishing the guide here. -Jan

Companion app networking

Having your Home available anywhere and everywhere you go is important, whether you forgot to turn off the stove or you want to check the camera views because of an alert.

Because we want your smart home to be private and secure on the web, many parts of the puzzle need to align just right so everything works as you expect. This guide aims to help you understand the requirements, some of the complexities, and our recommended typical solutions for setting up network access to your Home Assistant instance.

The basics: How the app talks to your Home Assistant

In order for the app to talk to HA, it needs to know its address. Just within your home network you might know that your Home Assistant is on an IP like 192.168.1.4 and listening on port 8123. If you use Hass.io and haven’t changed any of the defaults, Home Assistant will also be reachable at http://hassio.local:8123.
This is all fine and will work perfectly well as long as you never take your phone or tablet outside your home, but what if you do?
The easiest way is to subscribe to Nabu Casa Cloud for a small monthly fee of US$5, which solves all of this for you while supporting further development of Home Assistant, and you can (almost) stop reading here. Nabu Casa Cloud acts as a “smart” proxy on the internet, tunnelling your frontend in an encrypted manner from your home to your phone, regardless of where you are and without requiring you to open your home network to inbound traffic from the internet.
If you don’t want to use Nabu Casa Cloud (which is fine, but you should still subscribe and enjoy the warm feeling of supporting Home Assistant), you need HA to be accessible from the internet. This requires opening a port on your router and getting a name for your Home Assistant on the internet. While it is possible to have HA use port 8123 internally and have your router port-forward from, say, the default https port 443 to 8123, we recommend you NOT do this, for reasons of simplicity which we will explain later. You also need a name for your Home Assistant, as hassio.local is a private domain suffix that does not exist on the internet.

Dynamic DNS

Most non-business internet connections have at least one of two drawbacks: your internet service provider typically does not give you a static IP (meaning the public IP address your modem/router is assigned will change every once in a while, or even every 24 hours), and some ISPs don’t even give you a “real” IP address, as they do not have enough addresses to give out. This last scenario is very common with cable providers, especially in Asia/Pacific. If your ISP says they use Carrier-grade NAT (CG-NAT) or something like Dual Stack Lite (DS-Lite), you likely have this problem. If you’re affected, please see the CG-NAT and IPv6 addenda.
For dynamic, public IP addresses the solution is simple: typically users choose a dynamic DNS service such as duckdns.org, which will create a unique name (e.g. my-home.duckdns.org) that your router can keep updated so it always points to your public address. If you have created the port forward from TCP 8123 on your router’s public interface to TCP 8123 on your internal Home Assistant IP (say 192.168.1.4), your Home Assistant is now available on the web. You could declare victory at this point and stop, but don’t: everything at this point is unencrypted, and we want you to enjoy HA in a private, secure manner.
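A quick way to confirm the dynamic DNS record is updating (using the example name from above):

nslookup my-home.duckdns.org

The answer should match your router’s current public IP, which most routers show on their status page.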

Hairpin NAT

At this point of setting up, we need to check one capability of your router: hairpin NAT (otherwise known as NAT reflection or NAT loopback). This means the ability of your router to mirror a request from its inside (LAN) interface to its outside (WAN) address back to an internal IP address (in this case, your Home Assistant), thus reflecting or hairpinning the traffic. It’s easy to check whether this works: just open a browser on your phone or PC while connected to your home network and go to http://my-home.duckdns.org:8123. If it loads, hairpin NAT is working and you can go on to the next section. Most current routers support NAT hairpinning out of the box; there are, however, some routers (especially ones supplied by your ISP) that do not have this ability or have it disabled. If this is the case, check whether you can enable it on your router; if you can’t, you will need to set up Split Brain DNS.
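The same check from a shell on a machine inside your LAN (a sketch using the example name from above; if hairpin NAT is missing, the request will simply time out):

curl -s -o /dev/null -w '%{http_code}\n' http://my-home.duckdns.org:8123

Any HTTP status code printed here means your router reflected the request back to Home Assistant.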

Securing the connection

We’ll stay with our DuckDNS example. Using http://my-home.duckdns.org:8123 works, but anyone could be reading your traffic. Let’s change that! The DuckDNS Hass.io add-on will create a free, trusted and valid LetsEncrypt SSL certificate to use on your Home Assistant. Just follow the installation instructions here and here and you will have secure, public access to your Home Assistant. What’s great about the DuckDNS add-on is that it uses the LetsEncrypt DNS challenge, whereby it proves “ownership” of the domain while requesting the certificate by creating a temporary DNS record. If you use a DNS provider other than DuckDNS, you can use the LetsEncrypt add-on for Hass.io, which supports proving ownership of the name via either the DNS or the http challenge. The latter requires port-forwarding TCP port 80 on your router to TCP port 80 on your internal Home Assistant IP.

With Hairpin NAT working and SSL on your DNS domain you can now access Home Assistant securely both on the internet and at home and you should add base_url: my-home.duckdns.org:8123 to the http: section of your configuration.yaml. This is not strictly necessary but will help with auto-detection during onboarding of the iOS app, as the app will know where and how to reach your Home Assistant.
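For reference, this is the only change needed in configuration.yaml at this point (a minimal sketch; leave the rest of your http: section as it is):

http:
  base_url: my-home.duckdns.org:8123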

Split Brain DNS

So what’s this split brain DNS (also known as split horizon DNS or split-DNS) thing, and why would I need it? If your router doesn’t do hairpin NAT, you still need to access your Home Assistant via the public DNS name, e.g. my-home.duckdns.org. Why is that? Because valid encryption via https and SSL certificates only works for public DNS names: the certificate name on your server needs to match the DNS name you enter in your browser or app. This is fine with hairpin NAT available but becomes an issue when it’s not. In this case you need to “split” the answer your browser/app gets when it looks up the IP address behind my-home.duckdns.org: you need one answer for devices on your home network that points to the internal IP address of your Home Assistant (e.g. 192.168.1.4) and another answer for when you’re out and about (e.g. 104.25.25.31).
The easiest solution is the Hass.io add-on AdGuard Home. This can also be set up on some routers (e.g. pfSense or UniFi Security Gateways), but we’ll continue our example guide with the tools provided via Hass.io: so you’ve now installed the AdGuard Home add-on and changed the DNS server in your router’s DHCP settings to the address of your Home Assistant. You should now go to the AdGuard Home page in your Hass.io panel, open Settings -> DNS settings, and scroll down to the bottom, where you have a box titled DNS rewrites.
Here you click Add DNS rewrite, enter my-home.duckdns.org and the internal IP 192.168.1.4 of your Home Assistant, and click save. From now on, all DNS queries for my-home.duckdns.org from inside your home network will be answered by AdGuard from its own rewrite table, pointing to the internal address of your Home Assistant, instead of going to public DNS servers on the web, which would all answer with the public IP of your router.
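You can verify the rewrite from any client inside your LAN (assuming the client already uses your Home Assistant IP as its DNS server, per the DHCP change above):

nslookup my-home.duckdns.org

Inside the LAN this should now return 192.168.1.4, while from outside it still returns your public IP.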
Even if you don’t need split brain DNS, you may want to set this up anyway, as it will let you access Home Assistant via its public name even when your internet connection is down and hairpin NAT won’t work. One less dependency on the cloud!

Setting up the iOS app

If you’ve followed all our advice, your app should find your Home Assistant instance automatically during onboarding when connected to your home wifi network. You can also go through onboarding anywhere you’re connected to the internet by manually entering https://my-home.duckdns.org:8123 and the setup will finish with that address in the External URL field in the app connection settings. There should be no need to enter an internal URL as the same address will work regardless of where your phone is connected.
If you want to (or have to) use Nabu Casa Cloud, a few more steps are required:
– In iOS settings, set the location access permission for Home Assistant to Always. This is required because, starting with iOS 13, Apple only lets apps with this permission access the wifi SSID, which the app uses to determine whether to use the internal or external URL.
– Once permission is given, add your Home Assistant address as the internal URL (if you come from the top of this article, this could be http://hassio.local:8123).
– If you’ve set up Nabu Casa Cloud in your Home Assistant, the “Connect via Cloud” checkbox should now become available. Once you activate the checkbox, the external URL will be deactivated.

Addendum: CG-NAT

If your ISP doesn’t give you a public IPv4 address, you’re down to basically two solutions: call your ISP and ask whether they can give you a real address or whether an upgrade for your connection is available (oddly enough, asking nicely works with many ISPs out there), or use Nabu Casa Cloud.

Addendum: IPv6

Since IPv6 has been rolling out for the last 20 years, chances are that along with an IPv4 address your home network will also have been provided with IPv6 addresses by your ISP. So your Home Assistant host may have its IPv4 address of 192.168.1.4 as well as an IPv6 address of e897:5571:5f66:21dc:51c1:28d8:3bdc:6724. Here’s where our advice about not changing the TCP port you forward to Home Assistant comes in:
– Home Assistant will listen for traffic on 192.168.1.4:8123 and [e897:5571:5f66:21dc:51c1:28d8:3bdc:6724]:8123
– If you really want to future-proof your setup, you will have two DNS records for my-home.duckdns.org: an A record pointing to your router’s public IPv4 address, which is port-forwarded to your HA host’s internal address, and an AAAA record, which points directly to the IPv6 address of your HA host. Now when you access your HA remotely, either protocol could be used, since all you’re entering is https://my-home.duckdns.org:8123. If you had changed the port on your router to the https default 443, the connection would fail as soon as you ended up with a working IPv6 setup, as nothing is listening on [e897:5571:5f66:21dc:51c1:28d8:3bdc:6724]:443.
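You can check both records at once from any machine with dig installed (the name is our running example):

dig +short A my-home.duckdns.org
dig +short AAAA my-home.duckdns.org

The A record should show your router’s public IPv4 address, the AAAA record your HA host’s IPv6 address.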

Addendum: Reverse Proxy via NGINX

There are cases where having Home Assistant serve https is impossible or incompatible with some of your devices. This can be especially true with ESP-based low-power IoT hardware that communicates via a REST API and just doesn’t have the horsepower to do SSL encryption. One example is the konnected.io integration, which requires Home Assistant to be reachable via http.
So, to accommodate this and still have encryption for external access, we use a reverse proxy like NGINX. A reverse proxy acts as an intermediary for your clients (browser or app): the client talks to the reverse proxy securely via https, and the proxy passes the traffic through to Home Assistant over an unencrypted http connection. Staying with our Hass.io example, we’ll assume you have already set up DuckDNS and LetsEncrypt. You should now install the Hass.io add-on NGINX Home Assistant SSL proxy and configure it according to the docs.

In your configuration.yaml file the following changes are needed:

http:
  use_x_forwarded_for: true     # To ensure HA understands that client requests come via reverse proxy
  trusted_proxies:
    - 172.30.32.0/23            # In Hass.io we need to add the Docker subnet
    - 127.0.0.1                 # Add the localhost IPv4 address
    - ::1                       # Add the localhost IPv6 address
  base_url: my-home.duckdns.org # Note we no longer have a :8123 Port here
  # Comment out or remove the SSL certificate lines:
  # ssl_certificate: /ssl/fullchain.pem
  # ssl_key: /ssl/privkey.pem

Once that’s done, your router’s port forward should be TCP 443 to your Home Assistant internal IP 192.168.1.4, port 443. Do NOT create a forward to 192.168.1.4 port 8123, as that is now unencrypted http and should only be accessible from your local network.
You can now access your Home Assistant via https://my-home.duckdns.org both internally and externally while having http://192.168.1.4:8123 available to be used as unencrypted endpoint for things like konnected.io.
Note: If you don’t use the NGINX Hass.io add-on but instead roll your own, please ensure that websocket support is enabled.
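For the roll-your-own case, here is a minimal sketch of the relevant NGINX server block (assuming your LetsEncrypt files live in /ssl/ and HA listens on 192.168.1.4:8123 as in our example; adapt names and paths to your setup):

server {
    listen 443 ssl;
    server_name my-home.duckdns.org;
    ssl_certificate /ssl/fullchain.pem;
    ssl_certificate_key /ssl/privkey.pem;

    location / {
        # Pass everything through to Home Assistant over plain http:
        proxy_pass http://192.168.1.4:8123;
        # Websocket support, which the Home Assistant frontend requires:
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}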

Fixing Windows Admin Center ‘Can’t verify whether “cluster_name” is online’

So you’re trying to add your hyper-converged cluster to Windows Admin Center and it’s giving you the ‘Can’t verify whether “cluster_name” is online’ treatment. You’ve checked DNS, upgraded WAC/Honolulu and tried installing it on multiple servers and workstations. Nothing helped. I have good news for you:

I ran into this immediately after Project Honolulu became public and had been banging my head against it ever since. Here’s what to do:

Check the Event Viewer\Applications and Services Logs\Microsoft-ServerManagementExperience log for the following entry:

400 - CimException: The xsi:type attribute (p1:MSCluster_Property_Node_PrivateProperties) does not identify an existing class.

This indicates that WAC is connecting fine to your cluster but is running into an issue where it’s missing some cluster property.

I’ll have to give props to Robert Hochmayr here, as he pointed me to the solution:

There are two private properties set on the cluster and its nodes which, through some combination of events (like adding nodes to the cluster at a later point in time), can end up missing from some nodes. You can find out by running the following PowerShell command on one of your S2D cluster nodes:

get-clusternode | Get-ClusterParameter

The output will look something like this:

Object Name                 Value Type
------ ----                 ----- ----
S2D-01 S2DCacheBehavior     88    UInt64
S2D-01 S2DCacheDesiredState 2     UInt32
S2D-03 S2DCacheDesiredState 2     UInt32
S2D-03 S2DCacheBehavior     88    UInt64

Note that this was a four-node cluster. Nodes S2D-02 and S2D-04 are missing!

Off to the registry to fix it:

Under HKLM\Cluster\Nodes\x\Parameters there should be two entries for the above cluster parameters. On my systems, the entire Parameters registry key was missing from nodes 1 and 4 (go figure…). I added the entries *on each host* by running the following command lines:

REG ADD HKEY_LOCAL_MACHINE\Cluster\Nodes\1\Parameters /f /v "S2DCacheBehavior" /t REG_QWORD /d "88"
REG ADD HKEY_LOCAL_MACHINE\Cluster\Nodes\1\Parameters /f /v "S2DCacheDesiredState" /t REG_DWORD /d "2"
REG ADD HKEY_LOCAL_MACHINE\Cluster\Nodes\4\Parameters /f /v "S2DCacheBehavior" /t REG_QWORD /d "88"
REG ADD HKEY_LOCAL_MACHINE\Cluster\Nodes\4\Parameters /f /v "S2DCacheDesiredState" /t REG_DWORD /d "2"
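If you’d rather stay in PowerShell than edit the registry directly, the FailoverClusters module should be able to do the same. A sketch I have not verified against a live cluster (assuming Set-ClusterParameter’s -Create switch adds a parameter that doesn’t exist yet):

Get-ClusterNode -Name S2D-02 | Set-ClusterParameter -Name S2DCacheBehavior -Value 88 -Create
Get-ClusterNode -Name S2D-02 | Set-ClusterParameter -Name S2DCacheDesiredState -Value 2 -Create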

Checking again, I now get the correct PowerShell output:

get-clusternode | Get-ClusterParameter

Object Name                 Value Type
------ ----                 ----- ----
S2D-01 S2DCacheBehavior     88    UInt64
S2D-01 S2DCacheDesiredState 2     UInt32
S2D-02 S2DCacheBehavior     88    UInt64
S2D-02 S2DCacheDesiredState 2     UInt32
S2D-03 S2DCacheBehavior     88    UInt64
S2D-03 S2DCacheDesiredState 2     UInt32
S2D-04 S2DCacheBehavior     88    UInt64
S2D-04 S2DCacheDesiredState 2     UInt32

Once this was added, I was immediately able to add the cluster to Windows Admin Center. No reboots or service restarts were needed.

-Jan