Nov 13, 2015

EMC Federation Enterprise Hybrid Cloud 3.1 Disaster Recovery





In the Federation Enterprise Hybrid Cloud v3.0, the foundation architecture has been changed to match the disaster recovery (DR) architecture from v2.5.1.




Specifically, this means that an additional SSO server and SQL server are deployed in the Automation Pod, allowing the other components in the pod to fail over and still have access to those services.
This change allows customers to add DR functionality at a later time without having to make any architectural changes to the underlying infrastructure.





Protected sites can now be used as recovery sites as well, so that either site can fail while the ability to provision new resources is retained.
In this example, the site in Phoenix serves as the recovery site for Boston, and vice versa. If the Boston site were to experience a failure, the Boston resources could be activated in Phoenix as part of the disaster recovery protocol.
If the Boston site remained down for an extended amount of time, new resources that are intended to be run in Boston could be provisioned at the Phoenix recovery site in the interim.
When the Boston site comes back online, the newly created resources could then be replicated to Boston and set as “protected” as if they had been originally created there. While all of this is taking place, operations in Phoenix are unaffected.




In the Federation Enterprise Hybrid Cloud v3.0, a new Catalog Item was created that allows clusters to be paired from within vRealize Automation. The backing workflow sets the properties and key values that are necessary for the cluster to function within a DR environment, a task that previously had to be performed manually.
Once the cluster pair is created, the recovery plan in SRM must still be created manually, based on the cluster name and properties.
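As a rough illustration of driving this catalog item programmatically, the Python sketch below submits a cluster-pairing request through the vRealize Automation consumer catalog REST API. The endpoint paths, payload fields, catalog item name, and cluster names are assumptions for the sketch, not the documented Federation Enterprise Hybrid Cloud interface.

# Illustrative only: endpoint paths, payload fields, item name, and cluster names
# are assumptions, not the documented EHC/vRA interface.
import requests

VRA = "https://vra.example.local"          # hypothetical vRealize Automation FQDN
HEADERS = {
    "Authorization": "Bearer <token>",     # assume a token was obtained at login
    "Content-Type": "application/json",
}

# Find the (hypothetical) "Cluster Pairing" item in the entitled catalog.
items = requests.get(VRA + "/catalog-service/api/consumer/entitledCatalogItems",
                     headers=HEADERS, verify=False).json()
pair_item = next(i for i in items["content"]
                 if i["catalogItem"]["name"] == "Cluster Pairing")

# Request the item, passing the protected/recovery cluster names as key values.
payload = {
    "@type": "CatalogItemRequest",
    "catalogItemRef": {"id": pair_item["catalogItem"]["id"]},
    "requestData": {"entries": [
        {"key": "protectedCluster", "value": {"type": "string", "value": "BOS-Cluster01"}},
        {"key": "recoveryCluster", "value": {"type": "string", "value": "PHX-Cluster01"}},
    ]},
}
response = requests.post(VRA + "/catalog-service/api/consumer/requests",
                         headers=HEADERS, json=payload, verify=False)
response.raise_for_status()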


Nov 12, 2015

EMC Cloud Tiering Appliance Fundamentals: CTA Hardware Overview







The CTA’s most recent platform model is the Generation 8 (Gen-8), which is built on an Intel 1U two-socket rack-mounted server. This platform uses dual Intel Sandy Bridge 2.0 GHz processors and contains 16 GB of RAM and four 900 GB SAS drives configured as a RAID 5 set with one hot spare.
For network connectivity, the server comes with four gigabit Ethernet ports on the motherboard. There is also an optional I/O module that provides two additional 10 GbE ports.





Another platform option for the CTA is the Generation 7 (Gen-7) model. This model is a Dell R710 2U server with only 4 GB of RAM instead of the Gen-8’s 16 GB.
The Gen-7 comes with four 1 TB SATA drives in a RAID 1 configuration with two hot spares. There are two gigabit Ethernet ports for network connection. With this model, only 250 million files can be archived per appliance.





The oldest CTA hardware model is the Generation 6 (Gen-6). The Gen-6 is built on the Dell 2950 server. This model contains dual Intel 3.0 GHz Xeon processors with 4 GB of RAM.
There are four 250 GB SATA drives in a RAID 5 configuration with one hot spare. As with the Gen-7 model, the Gen-6 has only two gigabit Ethernet ports for networking and has a limit of 250 million archived files per appliance.





In the event that a Gen-8 CTA or CTA-HA becomes unresponsive, a network console management page is available to allow users to control the appliance or reboot the system.
The console can be accessed through a dedicated management port on the back of the appliance which is labeled MGMT. For security purposes, network console management should be enabled on a network with access limited to system administrators only.

To implement a CTA/VE solution, four virtual CPUs, 16 GB of virtual RAM, 1 TB of virtual disk space, and two gigabit virtual interfaces must be reserved. For a CTA/VE-HA, the only differences compared to a CTA/VE are that only 4 GB of virtual RAM and 100 GB of virtual disk space are needed. Both CTA/VE and CTA/VE-HA support ESXi Server 4.1 and later, as well as ESX Server 4.0 and later.
Disk space for the CTA/VE can be thin provisioned on the back end; make sure that the full 1 TB is available in case the CTA/VE needs to archive close to its limit of 500 million files.
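As a quick sanity check against these minimums, here is a small Python sketch; the VmPlan structure and the sample values are illustrations only, not an EMC tool.

# Minimal sizing check for the CTA/VE requirements described above.
from dataclasses import dataclass

@dataclass
class VmPlan:
    vcpus: int
    ram_gb: int
    disk_gb: int
    nics: int

REQUIREMENTS = {
    "CTA/VE":    VmPlan(vcpus=4, ram_gb=16, disk_gb=1000, nics=2),
    "CTA/VE-HA": VmPlan(vcpus=4, ram_gb=4,  disk_gb=100,  nics=2),
}

def meets_requirements(planned: VmPlan, model: str) -> bool:
    """Return True if the planned VM meets or exceeds the stated minimums."""
    req = REQUIREMENTS[model]
    return (planned.vcpus >= req.vcpus and planned.ram_gb >= req.ram_gb
            and planned.disk_gb >= req.disk_gb and planned.nics >= req.nics)

print(meets_requirements(VmPlan(4, 16, 1024, 2), "CTA/VE"))     # True
print(meets_requirements(VmPlan(4, 4, 100, 2), "CTA/VE-HA"))    # True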





For many environments, using a single CTA or CTA/VE network interface will satisfy networking requirements. However, there are cases when more complex topologies are used.
The CTA supports combining Ethernet interfaces to form a bonded interface. This topology is used for high availability, protecting the CTA installation from a single point of failure. You may also use two or more subnets: one for the primary storage tier and another for either the secondary tier or a management interface, with one port used for one subnet and another port for the second. The CTA also supports VLAN tagging and VLAN bonding, which is a VLAN interface created on top of a bond interface. Be aware that bonding and VLAN bonding are not supported on the CTA/VE.
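The topology choices and the CTA/VE restriction above can be summarized in a short Python sketch; this only encodes the rules as stated and is not CTA configuration syntax.

# Illustrative model of the CTA network topologies described above.
SUPPORTED_TOPOLOGIES = {
    "single",        # one interface for all traffic
    "bond",          # Ethernet interfaces combined for high availability
    "multi-subnet",  # e.g. one port for primary tier, one for secondary/management
    "vlan",          # VLAN tagging on a physical interface
    "vlan-bond",     # VLAN interface created on top of a bond
}

def topology_allowed(topology: str, is_virtual_edition: bool) -> bool:
    """Bonding and VLAN bonding are not supported on the CTA/VE."""
    if topology not in SUPPORTED_TOPOLOGIES:
        raise ValueError("unknown topology: " + topology)
    if is_virtual_edition and topology in {"bond", "vlan-bond"}:
        return False
    return True

print(topology_allowed("vlan-bond", is_virtual_edition=True))     # False
print(topology_allowed("multi-subnet", is_virtual_edition=True))  # True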

EMC VPLEX Management Overview






VPLEX provides two methods of management through the VPLEX Management Console. The
Management Console can be accessed through a command line interface (CLI) as well as a
graphical user interface (GUI). The CLI is accessed by connecting with SSH to the Management
Server and then entering the command vplexcli. This command causes the CLI to telnet to port
49500.
The GUI is accessed by pointing a browser at the Management Server IP using the https protocol.
The GUI is based on Flash and requires the client to have Adobe Flash installed.
Every time the Management Console is launched, whether through the CLI or the GUI, a session log is
created in the /var/log/VPlex/cli/ directory. This can be helpful in determining which commands
were run during a user's session.
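As a minimal sketch, the Python snippet below opens an SSH session to the Management Server with paramiko and launches vplexcli; the host name and credentials are placeholders.

# Minimal sketch: SSH to the VPLEX Management Server and start the vplexcli
# shell (which itself telnets to port 49500 on the server).
import time
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("vplex-mgmt.example.local", username="service", password="password")

# Start an interactive shell, launch vplexcli, and read whatever it prints.
shell = client.invoke_shell()
shell.send("vplexcli\n")
time.sleep(5)                      # crude wait for the CLI prompt; tune as needed
print(shell.recv(65535).decode())  # banner/prompt from the vplexcli session
shell.send("exit\n")
client.close()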





EMC VPLEX software architecture is object-oriented, with various types of objects defined with
specific attributes for each. The fundamental philosophy of the management infrastructure is
based on the idea of viewing, and potentially modifying, attributes of an object.





The VPLEX CLI is based on a tree structure similar to the structure of a Linux file system.
Fundamental to the VPLEX CLI is the notion of “object context”, which is determined by the
current location (pwd) within the directory tree of managed objects.
Many VPLEX CLI operations can be performed from the current context; however, some
commands may require the user to cd to a different directory before running the command.
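Conceptually, the context tree behaves like the small Python model below; the context names are illustrative and this is not the actual CLI implementation.

# Conceptual sketch of the VPLEX CLI's tree of managed-object contexts.
tree = {
    "/": ["clusters", "engines", "management-server"],
    "/clusters": ["cluster-1", "cluster-2"],
    "/clusters/cluster-1": ["storage-elements", "virtual-volumes"],
}

class CliSession:
    def __init__(self):
        self.context = "/"          # the "pwd" within the object tree

    def cd(self, path: str):
        """Change the current object context, as with cd in the VPLEX CLI."""
        if path not in tree:
            raise ValueError("no such context: " + path)
        self.context = path

    def ls(self):
        """List child contexts of the current context."""
        return tree.get(self.context, [])

session = CliSession()
session.cd("/clusters/cluster-1")
print(session.context, session.ls())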

Useful Links

 VPLEX Architecture Deployment

EMC VPLEX 5.0 Architecture Guide

EMC VPLEX: ELEMENTS OF PERFORMANCE AND TESTING BEST

EMC VPLEX Features and Capabilities





Extent mobility is an EMC VPLEX mechanism to move all data from a source extent to a target
extent. It is completely non-disruptive to any layered devices, and completely transparent to
hosts using virtual volumes built on those devices. Over time, storage volumes can become
fragmented as extents are added and deleted, and extent mobility can be used to help
defragment them. Extent mobility cannot occur across clusters.





The data on devices can be moved to other devices within the same cluster or at a remote cluster in
a VPLEX Metro. The same cannot be said for extent mobility, as data on extents can only be
moved to other extents within the same cluster.





The data mobility feature allows you to non-disruptively move data on an extent or device to
another extent or device in the same cluster. The procedure for moving extents and devices is the
same and uses either devices or extents as the source or target.
You can run up to a total of 25 extent and device migrations concurrently. The system allocates
resources and queues any remaining mobility jobs as necessary. View the status and progress of
a mobility job in Mobility Central.
Mobility Central provides a central location to create, view, and manage all extent and device
mobility jobs.
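The described scheduling behavior, at most 25 concurrent migrations with any extra jobs queued, can be illustrated with a short Python sketch (not VPLEX code):

# Illustrative sketch: at most 25 extent/device migrations run concurrently;
# additional mobility jobs are queued until a slot frees up.
from collections import deque

MAX_CONCURRENT_MIGRATIONS = 25

class MobilityScheduler:
    def __init__(self):
        self.running = []
        self.queued = deque()

    def submit(self, job_name: str):
        """Start the job if a slot is free, otherwise queue it."""
        if len(self.running) < MAX_CONCURRENT_MIGRATIONS:
            self.running.append(job_name)
        else:
            self.queued.append(job_name)

    def complete(self, job_name: str):
        """Finish a job and promote the next queued job, if any."""
        self.running.remove(job_name)
        if self.queued:
            self.running.append(self.queued.popleft())

sched = MobilityScheduler()
for n in range(30):
    sched.submit("migration-" + str(n))
print(len(sched.running), len(sched.queued))   # 25 5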





A Distributed Device is a RAID-1 device that is created within a VPLEX Metro or Geo. The
distributed device is composed of two local devices that exist at each site and are mirrors of each
other. Read and write operations pass through the VPLEX WAN. Data is protected because writes
must travel to the back-end storage of both clusters before being acknowledged to the host.
Metro offers synchronous updates to the distributed device, while Geo offers an asynchronous
method that allows for greater distances between sites.





Distributed devices are mirrored between clusters in a VPLEX Metro. In order to create a
distributed device, a local device must exist at both sites. The distributed RAID-1 device that is
created on top of the two local devices can only be as large as the smaller of the two devices. This is
due to the way RAID 1 operates. Distributed devices are created through the Distributed Devices
option.
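In other words, the usable capacity of the distributed device is simply the minimum of the two legs, as in this small illustration:

# The distributed RAID-1 device can only be as large as the smaller local leg.
def max_distributed_device_size(local_device_a_gb: float, local_device_b_gb: float) -> float:
    return min(local_device_a_gb, local_device_b_gb)

print(max_distributed_device_size(500, 750))   # 500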





VPLEX Consistency Groups aggregate volumes together to ensure the common application of a set
of properties to the entire group. Consistency Groups are created for sets of volumes that require
the same I/O behavior in the event of a link failure, such as those belonging to a single application. In the
event of a director, cluster, or inter-cluster link failure, Consistency Groups help prevent possible data
corruption. The optional VPLEX Witness failure recovery semantics apply only to volumes in
Consistency Groups. In addition, you can move a Consistency Group from one cluster to
another if required.

EMC VPLEX Architecture





The diagram displays the VS2 engines released with VPLEX 5.0. The example shows a quad-engine VPLEX configuration. Notice that the engine numbering starts from the bottom up. Every engine contains a standard power supply. Aside from having half the engines, a dual-engine configuration also has one fewer SPS than a quad. A single-engine implementation contains only a Management Server, an engine, and an SPS.

All supported VPLEX configurations ship in a standard, single rack. The shipped rack contains the selected number of engines, one Management Server, redundant Standby Power Supplies (SPS) for each engine, and any other needed internal components. The pair of SPS units provides DC power to the engines in case there is a loss of AC power. The batteries in the SPSs can hold a charge for up to 10 minutes; however, the maximum supported hold time is 5 minutes.
The dual and quad configurations include redundant internal FC switches for LCOM connections between the directors. In addition, dual and quad configurations contain redundant Uninterruptible Power Supplies (UPS) that service the FC switches and the Management Server. GeoSynchrony is pre-installed on the VPLEX hardware, and the system is pre-cabled and pre-tested.
Engines are numbered 1-4 from the bottom to the top. Any spare space in the shipped rack is to be preserved for potential engine upgrades in the future. Since the engine number dictates its physical position in the rack, numbering will remain intact as engines are added during a cluster upgrade.




VPLEX engines can be deployed as a single, dual, or quad cluster configuration depending upon the number of front-end and back-end connections required. The VPLEX’s advanced data caching algorithms are able to detect sequential reads to disk. As a result, VPLEX engines are able to fetch
data from disk to cache in order to improve host read performance.
VPLEX engines are the brains of a VPLEX system. Each engine contains two directors, each providing front-end and back-end I/O connectivity, along with redundant power supplies, fans, I/O modules, and management modules. The directors are the workhorse components of the system and are responsible for processing I/O requests from the hosts, serving and maintaining data in the distributed cache, providing the virtual-to-physical I/O translations, and interacting with the storage arrays to service I/O.
A VPLEX VS2 Engine has 10 I/O modules, with five allocated to each director. Each director has
one four-port 8 Gb/s Fibre Channel I/O module used for front-end SAN (host) connectivity and
one four-port 8 Gb/s Fibre Channel I/O module used for back-end SAN (storage array) connectivity. Each of these modules has 40 Gb/s of effective PCI bandwidth to the CPUs of its
corresponding director. A third I/O module, called the WAN COM module, is used for inter-cluster
communication. Two variants of this module are offered, one four-port 8 Gb/s Fibre Channel
module and one two-port 10 Gb/s Ethernet module. The fourth I/O module provides two ports of
8 Gb/s Fibre Channel connectivity for intra-cluster communication. The fifth I/O module for each
director is reserved for future use.
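For reference, the per-director module layout described above can be captured as a small data structure; the slot labels are illustrative.

# Per-director I/O module layout of a VS2 engine, as described above.
DIRECTOR_IO_MODULES = {
    "slot-0": {"role": "front-end SAN (host) connectivity", "ports": 4, "type": "8 Gb/s FC"},
    "slot-1": {"role": "back-end SAN (array) connectivity", "ports": 4, "type": "8 Gb/s FC"},
    "slot-2": {"role": "WAN COM (inter-cluster)", "ports": None,
               "type": "4-port 8 Gb/s FC or 2-port 10 Gb/s Ethernet"},
    "slot-3": {"role": "intra-cluster communication", "ports": 2, "type": "8 Gb/s FC"},
    "slot-4": {"role": "reserved for future use", "ports": 0, "type": None},
}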





The VPLEX product family has currently released three configuration options: VPLEX Local, Metro,
and Geo.
VPLEX Local provides seamless, non-disruptive data mobility and the ability to manage and mirror
data between multiple heterogeneous arrays from a single interface within a data center. VPLEX
Local consists of a single VPLEX cluster. It contains a next-generation architecture that allows
increased availability, simplified management, and improved utilization and availability across
multiple arrays.
VPLEX Metro enables active/active, block level access to data between two sites within
synchronous distances. The distance is limited not only by physical distance but also by host and
application requirements. Depending on the application, VPLEX clusters should be installed with
inter-cluster links that can support no more than 5 ms of round-trip delay (RTT). The combination
of virtual storage with VPLEX Metro and virtual servers enables the transparent movement of
virtual machines and storage across synchronous distances. This technology provides improved
utilization and availability across heterogeneous arrays and multiple sites.
VPLEX Geo enables active/active, block level access to data between two sites within
asynchronous distances. VPLEX Geo enables more cost-effective use of resources and power.
VPLEX Geo extends the distance for distributed devices up to and within 50ms RTT. As with any
asynchronous transport media, you must also consider bandwidth to ensure optimal performance.
Due to the asynchronous nature of distributed writes, VPLEX Geo has different availability and
performance characteristics than Metro.
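A small helper can show which configurations a measured inter-site latency could support, using the 5 ms and 50 ms limits stated above. This is illustrative only; actual qualification also depends on the application and available bandwidth.

# Which VPLEX configurations a measured inter-site round-trip time could support,
# using the latency limits stated above (Metro <= 5 ms, Geo <= 50 ms).
def supported_configurations(rtt_ms: float) -> list:
    configs = ["VPLEX Local"]           # single cluster, no inter-site link needed
    if rtt_ms <= 5:
        configs.append("VPLEX Metro")   # synchronous distances
    if rtt_ms <= 50:
        configs.append("VPLEX Geo")     # asynchronous distances
    return configs

print(supported_configurations(3))    # ['VPLEX Local', 'VPLEX Metro', 'VPLEX Geo']
print(supported_configurations(20))   # ['VPLEX Local', 'VPLEX Geo']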





EMC VPLEX is a next-generation architecture for data mobility and information access.
It is based on unique technology that combines scale-out clustering and advanced data caching
with distributed cache coherence intelligence to deliver radically new and improved
approaches to storage management.
This architecture allows data to be accessed and shared between locations over distance via a
distributed federation of storage resources.





Internal management of the VPLEX is performed with a dedicated IP network. This is a high-level
architectural view of the management connections between the Management Server and
directors. In this picture there are NO internal VPLEX IP switches. The directors are daisy chained
together via two redundant Ethernet connections.
The Management Server also connects via two redundant Ethernet connections to the directors in
the cluster. The Management Server is the only VPLEX component that gets configured with a
“public” IP on the data center network. From the data center IP network, the Management Server
can be accessed via SSH or HTTPS.
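A quick way to confirm that the Management Server's public IP answers on those two access paths is a simple port check from the data center network; the address below is a placeholder.

# Quick reachability check for the Management Server on the SSH (22) and
# HTTPS (443) ports mentioned above.
import socket

MGMT_SERVER = "vplex-mgmt.example.local"

for name, port in (("SSH", 22), ("HTTPS", 443)):
    try:
        with socket.create_connection((MGMT_SERVER, port), timeout=5):
            print(name + " (" + str(port) + "): reachable")
    except OSError as exc:
        print(name + " (" + str(port) + "): not reachable (" + str(exc) + ")")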





Cache coherence creates a consistent global view of a volume.
Distributed cache coherence is maintained using a directory. There is one directory per user
volume and each directory is split into chunks (4096 directory entries within each). These chunks
exist only if they are populated. There is one directory entry per global cache page, with
responsibility for:
• Tracking page owner(s) and remembering the last writer
• Locking and queuing
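Conceptually, the directory can be pictured with the small Python model below; this is only an illustration of the structure described above, not actual VPLEX internals.

# Conceptual model of the distributed cache-coherence directory: one directory
# per user volume, split into 4096-entry chunks that exist only when populated,
# and one entry per global cache page.
from collections import defaultdict, deque

ENTRIES_PER_CHUNK = 4096

class DirectoryEntry:
    def __init__(self):
        self.owners = set()        # directors holding the page
        self.last_writer = None    # remembers the last writer
        self.lock_queue = deque()  # locking and queuing of requests

class VolumeDirectory:
    def __init__(self):
        # Chunks are created lazily, i.e. they exist only if populated.
        self.chunks = defaultdict(dict)

    def entry_for_page(self, page: int) -> DirectoryEntry:
        chunk_id, slot = divmod(page, ENTRIES_PER_CHUNK)
        return self.chunks[chunk_id].setdefault(slot, DirectoryEntry())

directory = VolumeDirectory()
entry = directory.entry_for_page(page=9000)
entry.owners.add("director-1-1-A")
entry.last_writer = "director-1-1-A"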


