PIM Architectures

Document ID : KB000047702
Last Modified Date : 14/02/2018

Question:

How many architectures does Enterprise Management actually have?
 
Answer: 

There are several. Below is a high-level overview of each architecture and its purpose.
 
The Primary Enterprise Manager has a DMS__, a DH__, a DH_WRITER__, and a TIBCO Message Queue.
 
A Load-Balanced management server essentially mirrors all data back to the Primary. It uses a web service (eACWS) to allow JBoss to display data from the DMS__ on the Primary management server, and its TIBCO queues are routed up to the Primary Enterprise Manager server.
 
The High Availability architecture has a DMS__, a DH__, a DH_WRITER__, and a TIBCO Message Queue shared between the two nodes. Only one node can be active and reading data from the shared drive at a time.
 
The Disaster Recovery architecture has a DMS__, a DH__, a DH_WRITER__, and a TIBCO Message Queue. The DR DMS__ is a subscriber of the DMS__ on the Primary, so changes made on the Primary replicate down to it.
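 
As a simplified sketch (the placement and labels are illustrative only), the three supporting architectures relate to the Primary roughly as follows:

                 +------------------------------+
                 | Primary Enterprise Manager   |
                 | DMS__ / DH__ / DH_WRITER__   |
                 | TIBCO Message Queue          |
                 +------+----------------+------+
                        |                |
        eACWS / TIBCO routing       DMS__ subscription
                        |                |
      +-----------------+-----+     +----+----------------------+
      | Load-Balanced server  |     | Disaster Recovery server  |
      | (JBoss reads Primary  |     | DMS__ / DH__ / DH_WRITER__|
      |  DMS__ via eACWS)     |     | TIBCO Message Queue       |
      +-----------------------+     +---------------------------+

      High Availability: a second node clustered with the Primary,
      sharing its DMS__ / DH__ / DH_WRITER__ / Message Queue on a
      shared drive (only one node active at a time).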
 
You can add and remove as many Load-Balanced management servers as you would like without affecting the Primary management server.
 
The High Availability management server is clustered with the Primary management server (using Microsoft Cluster Manager on Windows, Veritas Cluster Server on Linux, or VMware clustering via virtualization). The cluster manages the DMS__, DH__, DH_WRITER__, and TIBCO Message Queue databases, which reside on a shared drive that moves with the active node.
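 
On Windows, for example, you can confirm that the shared disk and its dependent resources fail over together using the standard Failover Clustering cmdlets; the group name below is a placeholder, not a product default:

    # List the cluster group that owns the shared drive (group name is hypothetical)
    Get-ClusterGroup -Name "PIM-EM"

    # Show every resource that moves with that group
    Get-ClusterResource | Where-Object { $_.OwnerGroup -eq "PIM-EM" }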
 
On the PASSIVE High Availability management server, the services should be stopped. The endpoint can remain up to protect that machine, but make sure the Message Queue, DMS__, DH__, and DH_WRITER__ are not running, because only one management server at a time can access those databases.
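 
A minimal sketch of quiescing the passive node on Windows follows; the service names are hypothetical placeholders, so check the actual names registered on your Enterprise Manager first:

    REM Discover the real service names before stopping anything
    sc query state= all | findstr /i "queue pim"

    REM Placeholder names only -- substitute the names found above
    net stop "PIM Message Queue Service"
    net stop "PIM Enterprise Manager Service"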
 
The Disaster Recovery management server isn’t necessarily clustered. Services within the Disaster Recovery management server should be running, because anything that is done to the DMS__ on the Primary management server has to be sent down, via subscription, to the DMS__ on the Disaster Recovery management server.
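 
For reference, the subscription can usually be inspected and managed with the sepmd utility on the Primary; treat the syntax below as a sketch (it can vary by release), and the DR hostname is a placeholder:

    # List the subscribers of the Primary DMS__; the DR DMS__ should appear here
    sepmd -L DMS__

    # Add the DR DMS__ as a subscriber (hostname is hypothetical)
    sepmd -s DMS__ DMS__@dr-emserver.example.com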
 
The ideal Disaster Recovery topology would not connect to hostnames directly; it would use a Global Traffic Manager or an F5, so that when Disaster Recovery is enabled, a separate configuration points endpoints at the Distribution Servers in the DR environment rather than the Distribution Servers on the Primary node. The policyfetcher has configuration settings that define a Disaster Recovery DH__ to connect to. However, if you’re using Privileged User Password Management or Shared Account Management (the agentmanager and reportagent services), those services get their list of servers from accommon.ini. They work down the list in the order the entries were added, and whichever connection succeeds first, those two services remain on that Distribution Server until the connection is lost, regardless of whether the Primary Distribution Server comes back online.
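 
As a sketch of that ordering, the Distribution Server list in accommon.ini looks roughly like the following; the section and key names may differ by release, and both hostnames are placeholders:

    [communication]
    ; agentmanager and reportagent try these in order; the first
    ; successful connection is kept until it drops
    Distribution_Server=ssl://primary-ds.example.com:7243,ssl://dr-ds.example.com:7243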