
Unable to See and Add New ESXi Hosts in Nexus 1000v


After upgrading VMware from 5.x to 6.x and the Nexus 1000v from 4.2 to 5.2, you are unable to add new ESXi hosts to the Nexus 1000v distributed switch, even though the older hosts that were already added to the N1Kv distributed switch keep running without any issues.

This happens because the vCenter Postgres database (VCDB) has no entries marking the new ESXi versions as compatible with the Nexus 1000v distributed switch.

You have to manually add the new version to the vCenter database to restore support.

First you need to log in to the VCDB from the command line, and for that you need the database user ID and password.

To get the user ID and password, open C:\ProgramData\VMware\vCenterServer\cfg\vmware-vpx\vcdb.properties

The vcdb.properties file contents should look like this:

driver = org.postgresql.Driver
dbtype = PostgreSQL
url = jdbc:postgresql://localhost:5432/VCDB
username = vc
password = {FNr2Aad>ws8Xo<Q
password.encrypted = false

Grab the username and password from this file (the default user ID happens to be "vc", and the password is the value after "password =").

To add the new version:

Go to this path at the DOS prompt:
C:\Program Files\VMware\vCenter Server\vPostgres\bin\

Run the command:
C:\Program Files\VMware\vCenter Server\vPostgres\bin>psql -U vc VCDB
Enter the password found earlier in the file at C:\ProgramData\VMware\vCenterServer\cfg\vmware-vpx\vcdb.properties
Password for user vc:
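
If the login succeeds you land at the psql prompt for the VCDB database, something like this (the version banner and exact prompt are illustrative and depend on your install):

psql (9.3.x)
Type "help" for help.

VCDB=>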

Show the compatibility table:
SELECT * FROM VPX_DVS_COMPATIBLE;
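
For illustration only, the output should look something like this before the fix (the column names and existing rows are assumptions and will vary by environment; note that only the 5.x host versions are listed):

 dvs_id | product_line | version
--------+--------------+---------
     42 | esx          | 5.0+
     42 | embeddedEsx  | 5.0+
(2 rows)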

Insert the new version into the database with the following commands:
INSERT INTO VPX_DVS_COMPATIBLE VALUES (42,'esx','6.0+');
INSERT INTO VPX_DVS_COMPATIBLE VALUES (42,'embeddedEsx','6.0+');

Here 42 is the ID of the distributed switch; it can be seen in the first column of the output of:
SELECT * FROM VPX_DVS_COMPATIBLE;

Show the table again; it should now list support for the 6.0+ host versions:

SELECT * FROM VPX_DVS_COMPATIBLE;

You should see that the new version entries have been added to the database.
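
Again for illustration, with the same assumed column names, the new rows should now appear alongside the old ones:

 dvs_id | product_line | version
--------+--------------+---------
     42 | esx          | 5.0+
     42 | embeddedEsx  | 5.0+
     42 | esx          | 6.0+
     42 | embeddedEsx  | 6.0+
(4 rows)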

Exit the psql prompt by typing \q

Restart the vCenter Server services, then add the new hosts to the Nexus 1000v as usual.
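
One way to restart the services on a vCenter Server 6.x for Windows installation is the service-control utility (the path assumes a default install):

C:\Program Files\VMware\vCenter Server\bin>service-control --stop --all
C:\Program Files\VMware\vCenter Server\bin>service-control --start --all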
