Aug 18, 2015

Red Hat - Cluster Suite 5 - Architecture

Red Hat Cluster Suite (RHCS) is an integrated set of software components that can be deployed in a variety of configurations to suit your needs for performance, high-availability, load balancing, scalability, file sharing, and economy.
RHCS consists of the following major components:

  • Cluster infrastructure — Provides fundamental functions for nodes to work together as a cluster: configuration-file management, membership management, lock management, and fencing.
  • High-availability Service Management — Provides failover of services from one cluster node to another in case a node becomes inoperative.
  • Cluster administration tools — Configuration and management tools for setting up, configuring, and managing a Red Hat cluster. The tools are for use with the Cluster Infrastructure components, the High-availability Service Management components, and storage.
  • Linux Virtual Server (LVS) — Routing software that provides IP load balancing. LVS runs on a pair of redundant servers and distributes client requests evenly among the real servers behind them.
You can supplement Red Hat Cluster Suite with the following components, which are part of an optional package (and not part of Red Hat Cluster Suite):
  • GFS — GFS (Global File System) or GFS2 (Global File System 2) provides a cluster file system for use with Red Hat Cluster Suite. GFS/GFS2 allows multiple nodes to share storage at a block level as if the storage were connected locally to each cluster node.
  • Cluster Logical Volume Manager (CLVM) — Provides volume management of cluster storage; a brief CLVM/GFS2 setup sketch follows this list.
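As a rough illustration of how these optional pieces fit together, the sketch below prepares a clustered logical volume and puts a GFS2 file system on it. The device, volume-group, and cluster names (/dev/sdb, vg_shared, mycluster) are hypothetical placeholders, and the lvm2-cluster and gfs2-utils packages are assumed to be installed:

    # Switch LVM to cluster-wide locking (sets locking_type = 3 in /etc/lvm/lvm.conf)
    lvmconf --enable-cluster
    service clvmd start

    # Create a clustered volume group and a logical volume on the shared LUN
    pvcreate /dev/sdb
    vgcreate -c y vg_shared /dev/sdb
    lvcreate -L 50G -n lv_data vg_shared

    # Make the GFS2 file system: DLM locking, "mycluster" must match the cluster
    # name in cluster.conf, and -j 2 creates one journal per mounting node
    mkfs.gfs2 -p lock_dlm -t mycluster:data -j 2 /dev/vg_shared/lv_data

    # Mount it on each node
    mount -t gfs2 /dev/vg_shared/lv_data /mnt/data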
     
     
Within the cluster infrastructure, the Cluster Configuration System (CCS) consists of a daemon (ccsd) and a library. The daemon stores the configuration XML file in memory and responds to requests from the library (or from other CCS daemons) for cluster information. There are two operating modes, quorate and non-quorate: quorate operation ensures consistency of information among nodes, while non-quorate connections are allowed only if forced. Updates to the CCS can only happen in quorate mode.
If no cluster.conf exists at startup, a cluster node may adopt the first one it hears about via a multicast announcement.
The OpenAIS parser is a plugin that can be replaced at run time. The cman service that plugs into OpenAIS provides its own configuration parser, ccsd. This means /etc/ais/openais.conf is not used when cman is loaded into OpenAIS; ccsd is used for configuration instead.
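To make this concrete, here is a minimal sketch of the XML that ccsd serves from /etc/cluster/cluster.conf. The cluster name, node names, and APC fence device are hypothetical placeholders:

    <?xml version="1.0"?>
    <cluster name="mycluster" config_version="1">
      <cman two_node="1" expected_votes="1"/>
      <clusternodes>
        <clusternode name="node1.example.com" nodeid="1">
          <fence>
            <method name="primary">
              <device name="apc-pdu" port="1"/>
            </method>
          </fence>
        </clusternode>
        <clusternode name="node2.example.com" nodeid="2">
          <fence>
            <method name="primary">
              <device name="apc-pdu" port="2"/>
            </method>
          </fence>
        </clusternode>
      </clusternodes>
      <fencedevices>
        <fencedevice name="apc-pdu" agent="fence_apc" ipaddr="10.0.0.50" login="apc" passwd="secret"/>
      </fencedevices>
    </cluster>

Because updates are only accepted in quorate mode, the usual way to change a running configuration is to edit the file, increment config_version, and push it to the other nodes with ccs_tool update /etc/cluster/cluster.conf.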

The cluster manager (cman), an OpenAIS service, is the mechanism for configuring, controlling, querying, and calculating quorum for the cluster. It is configured via /etc/cluster/cluster.conf (through ccsd) and is responsible for the quorum disk API and for the functions that manage cluster quorum.
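cman_tool (from the cman package) is the usual command-line way to exercise these query functions. A few read-only examples, whose exact output varies by version:

    # Cluster name, config version, quorum state, and vote counts
    cman_tool status

    # Member nodes and their join status
    cman_tool nodes

    # The groups (fence, dlm, gfs) each node participates in
    cman_tool services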
     
     
Conga addresses a frequent complaint: users found value in the GUI interfaces provided for cluster configuration, but did not routinely install X and GTK libraries on their production servers. Conga solves this by placing an agent on each production server, managed through a web interface, while the GUI lives on a machine better suited to the task.
The elements of this architecture are:
    • luci is an application server that serves as a central point for managing one or more clusters; it cannot run on one of the cluster nodes. luci is ideally a machine with X already loaded and with network connectivity to the cluster nodes. luci maintains a database of node and user information. Once a system running ricci authenticates with a luci server, it never has to re-authenticate unless the certificate used is revoked. There is typically only one luci server for any and all clusters, though that doesn't have to be the case.
    • ricci is an agent that is installed on all servers being managed.
    • Web Client is typically a browser, such as Firefox, running on a machine in your network.
The interaction works as follows: the web client securely logs into the luci server, and through the web interface the administrator issues commands, which luci then forwards to the ricci agents on the nodes being managed.
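A rough sketch of bootstrapping this arrangement on RHEL 5, with hypothetical hosts and assuming the Conga packages are available from your repositories:

    # On the management machine (not one of the cluster nodes): luci
    yum install luci
    luci_admin init        # first run: set the luci admin password
    service luci start     # web interface answers on https://<host>:8084

    # On every cluster node: the ricci agent
    yum install ricci
    service ricci start
    chkconfig ricci on

After that, the web client logs into luci, adds the nodes by hostname, and authenticates to each node's ricci agent once.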








     
     
     
