Apr 7, 2015

EMC VNX2 - Architecture Overview

1. Overview

The VNX Series unifies EMC’s File-based and Block-based offerings into a single product that can be managed with one easy-to-use GUI. In addition, object storage solutions are available that make use of VNX storage systems. The VNX Series is a storage solution designed for a wide range of environments, from midtier to enterprise.
The VNX unified storage platforms support the NAS protocols (CIFS for Windows and NFS for UNIX/Linux, including pNFS) and the patented Multi-Path File System (MPFS), as well as the native block protocols (iSCSI and Fibre Channel). VNX is optimized for:
  • Core IT applications such as transactional workloads: Oracle, SAP, SQL Server, Exchange, and SharePoint.
  • Server virtualization and end-user computing/VDI.
  • Applications that need traditional file, block, or unified storage.


VNX is also a good fit for partner-led configurations optimized for virtual applications, with VMware and Hyper-V integration. The VNX with MCx (VNX2) architecture unleashes the power of Flash by taking full advantage of the latest Intel multi-core processors.





EMC VNX with MCx runs the MCx platform. MCx comprises two central components: Multi-Core Cache and Multi-Core RAID. What is VNX multi-core technology and why is it essential? VNX MCx was specifically designed to dynamically and evenly distribute all data services across all CPU cores in the system. MCx allows Multi-Core Cache and Multi-Core RAID services to scale linearly with the number of available CPU cores.
Flash storage, or Solid State Disks, has exponentially higher potential I/O throughput than spinning storage media. With Flash pushing multiple gigabytes per second of random I/O through the system, the storage operating system must be designed to keep pace. If the storage OS statically assigns data services to specific cores, Flash will saturate one or more cores while other cores sit underutilized, and the storage OS becomes a bottleneck that prevents the power of Flash from being realized. By dynamically spreading the workload evenly across all of the available cores, MCx empowers VNX to keep pace with this dramatic performance boost, allowing organizations to deliver extreme responsiveness to their performance-critical applications. This translates into more virtual machines, more transactions, and more aggregate IOPS for file services.
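
As a toy illustration (this is not MCx code; the services, costs, and scheduling policy are invented for the example), the following Python sketch contrasts a static service-to-core mapping, where one hot Flash-backed service saturates a single core, with dynamic placement on the least-loaded core:

    import random

    NUM_CORES = 8

    def static_placement(requests):
        # Each service is pinned to one core; a hot service saturates it.
        load = [0.0] * NUM_CORES
        for service, cost in requests:
            load[hash(service) % NUM_CORES] += cost
        return load

    def dynamic_placement(requests):
        # Any core may service any request; pick the least-loaded core.
        load = [0.0] * NUM_CORES
        for _, cost in requests:
            load[load.index(min(load))] += cost
        return load

    if __name__ == "__main__":
        # One Flash-backed service generating most of the I/O.
        reqs = [("cache" if random.random() < 0.8 else f"svc{random.randrange(4)}",
                 random.uniform(0.1, 1.0)) for _ in range(10_000)]
        print("static :", [round(x) for x in static_placement(reqs)])
        print("dynamic:", [round(x) for x in dynamic_placement(reqs)])

Running the sketch shows one core carrying most of the load under static placement, while dynamic placement keeps the per-core totals nearly equal.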



2. Architecture

The VNX modular architecture is designed to deliver a native block and file solution with dedicated components that are optimized for each specific use case and that leverage common hardware and core technologies across both the file and block implementations. The result is a unified storage product with scalable Data Movers, a Control Station, storage processors, I/O (Input/Output) modules, link control cards (LCCs), disk drives, and power supplies. The following sections cover these components in greater detail.





The Storage Processors are the core of the VNX platform. They deliver the VNX Block components and services, such as Multi-Core Cache (MCC) and Multi-Core RAID (MCR). The two Storage Processors, SPA and SPB, provide Block data access via UltraFlex I/O technology (covered later in this section), which supports the Fibre Channel, iSCSI, and FCoE protocols, providing access for all external hosts as well as for the File hardware of the VNX array.
The VNX Storage Processors operate in Active/Active mode: both controllers are online and receive host I/O simultaneously for the backend storage.



The VNX Data Mover X-Blade runs the VNX Operating Environment for File, which is optimized to move data between the storage and the IP network. The Data Movers provide highly available file-level access to users and applications via the NFS, CIFS, and pNFS protocols. The Data Mover X-Blade stores and accesses data through the Storage Processors. A VNX Data Mover can be configured as a standby Data Mover, which serves as a hot spare for up to seven primary (online) Data Movers. If a primary Data Mover goes down, the standby Data Mover takes its place with little or no disruption of service.



The Control Station is included in all VNX Unified and VNX File storage arrays to provide management and monitoring functions. The Control Station runs a customized Linux kernel. An optional second Control Station may be present in some models for redundancy.
The Control Station can be secured by implementing secure user authentication, and by isolating it on a secure, private network.
The Control Station software is used to install and configure the system, monitor the health of the primary Data Movers, and initiate failover to the standby Data Movers as required. It is also used to monitor the system’s environmental conditions and the performance of all components. The Control Station also facilitates the call-home and dial-in support features. The Control Station is not in the data path of any File or Block operations.

The power supplies provide the information necessary for Unisphere to monitor and display the ambient temperature and power consumption of each power supply. The power supplies are field replaceable units (FRUs). Each power supply includes two power LEDs and a status LED. The power supplies provide adaptive cooling, in which the array adjusts the power supply fan speeds to spin only as fast as needed to ensure sufficient cooling. The power supplies are hot-swappable and redundant.
The Standby Power Supply (SPS) provides battery power to the DPE, and power to the DAE and SPE on some VNX models. The VNX with MCx series uses a DPE with the SPS built into the SP (battery on board), except for the SPE-based VNX8000, which uses independent SPSs.
The VNX series platform can support up to two standby power supplies or dual SPSs. DC power supplies are available for models VNX5200, VNX5400, and VNX5600. This allows for the installation of these models in network telecommunication facilities or central offices. DC-to-AC inverters are available for use with VNX Control Stations. This makes these models NEBS (Network Equipment Building System) compliant.


VNX Series uses I/O modules in various combinations for front-end and back-end connectivity. Each I/O module is protocol independent and hot-swappable. Options for block I/O include Fibre Channel, Fibre Channel over Ethernet (FCoE), and iSCSI. Options for file I/O include both 1 Gb/s and 10 Gb/s Ethernet with either copper or optical connections.


VNX uses enclosures to hold the major and supporting components of the system, such as power supplies, and to provide hot-pluggable connectivity into the system. The possible enclosures are the Storage Processor Enclosure (SPE), Disk Processor Enclosure (DPE), Disk Array Enclosure (DAE), and Data Mover Enclosure (DME).
The enclosures present in the system depend on the VNX configuration type: Unified, Block, or File. A VNX Unified configuration, for example, contains all of these enclosure types. Depending on the model, disks may or may not be enclosed with the SPs. When no disks are enclosed with the SPs, an SPE is used; if disks are enclosed with the SPs, a DPE is used. DAEs do not contain any SPs; they hold only disks and their supporting hardware. Different size DAEs are used depending on the disk form factor and quantity. In Unified and File configurations, DMEs are used to contain the Data Mover X-Blades along with their supporting components.





3. Features and Capabilities
With the VNX with MCx architecture, EMC has introduced the first phase of an active/active access model. Symmetric Active/Active allows clients to access a Classic LUN (not supported on pool LUNs) simultaneously through both SPs for improved reliability, ease of management, and improved performance. Since all paths are active, there is no need for the storage processor to "trespass" to gain access to a LUN "owned" by the other storage processor on a path failure, eliminating application timeouts. The same is true for an SP failure: the surviving SP simply picks up all of the I/Os from the host through the alternate "optimized" path. This capability also enables additional LUN performance, as more backend bandwidth is available than can be served by a single SP or front-end (FE) port.
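A minimal Python sketch of the idea, with hypothetical path names and a round-robin policy standing in for a real multipath driver: every path to the LUN is usable, and a path failure simply removes one path rather than forcing a trespass:

    import itertools

    class Lun:
        def __init__(self, paths):
            # e.g. two front-end ports on each SP, all simultaneously active
            self.paths = sorted(paths)
            self._rr = itertools.cycle(self.paths)

        def fail_path(self, path):
            # No trespass, no ownership change: just drop the dead path.
            self.paths = [p for p in self.paths if p != path]
            self._rr = itertools.cycle(self.paths)

        def submit_io(self):
            # Round-robin across all surviving active paths.
            return next(self._rr)

    lun = Lun(["SPA-0", "SPA-1", "SPB-0", "SPB-1"])
    print([lun.submit_io() for _ in range(4)])   # I/O spread over both SPs
    lun.fail_path("SPA-0")
    print([lun.submit_io() for _ in range(3)])   # continues without timeout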



EMC VNX FailSafe Network, or FSN, provides high availability in the event of an Ethernet switch failure by connecting the two components of the FSN to separate switches. Unlike EtherChannel and LACP, an FSN can maintain full bandwidth when failed over, given the same bandwidth on both the active and standby configurations, and it does not require any special switch configuration. FailSafe Networks are configured as sets of ports, FastEtherChannels, Link Aggregations, or combinations of these.
Only one connection in an FSN is active at a time. If the FailSafe device detects that the active connection has failed, the Data Mover automatically switches to the surviving partner, which assumes the same identity as the failed connection.
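The failover behavior can be sketched roughly as follows (the class and field names are invented; the actual mechanism lives in the Data Mover's network stack):

    class FailSafeNetwork:
        def __init__(self, primary, standby, mac):
            self.devices = [primary, standby]   # each may be a port, trunk, or LACP group
            self.active = primary
            self.mac = mac                      # network identity follows the active link

        def monitor(self):
            # Poll link state; on failure the standby takes over transparently.
            if not self.active.get("link_up", True):
                survivor = next(d for d in self.devices if d is not self.active)
                self.active = survivor          # survivor answers with the same identity
                print(f"failover: {survivor['name']} active, identity {self.mac}")

    fsn = FailSafeNetwork({"name": "trk0", "link_up": True},
                          {"name": "trk1", "link_up": True},
                          mac="00:60:16:aa:bb:cc")
    fsn.devices[0]["link_up"] = False           # simulate a switch/link failure
    fsn.monitor()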




VNX Virtual Provisioning improves storage capacity utilization by allocating storage only as it is needed. File systems as well as LUNs can be logically sized to their required capacities but physically provisioned with less capacity. This means storage need not sit idle in a file system or LUN until it is used.
VNX Virtual Provisioning safeguards allow users to keep track of thinly provisioned file systems and LUNs. By reporting on actual physical usage, total logical size, and available capacity, administrators can both predict capacity needs and set alerts to avoid running out of physical capacity.
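
A conceptual sketch of thin allocation and a usage alert, with invented names and an assumed 256 MB allocation granularity:

    class ThinLun:
        SLICE_MB = 256                          # allocation granularity (assumed)

        def __init__(self, logical_gb, pool_free_gb, alert_pct=80):
            self.logical_mb = logical_gb * 1024 # what the host sees
            self.allocated = set()              # slices actually backed by the pool
            self.pool_free_mb = pool_free_gb * 1024
            self.alert_pct = alert_pct

        def write(self, offset_mb):
            s = offset_mb // self.SLICE_MB
            if s not in self.allocated:         # first write to a region allocates it
                self.pool_free_mb -= self.SLICE_MB
                self.allocated.add(s)
            used = len(self.allocated) * self.SLICE_MB
            if used * 100 // self.logical_mb >= self.alert_pct:
                print(f"alert: {used} MB of {self.logical_mb} MB logical in use")

    lun = ThinLun(logical_gb=1, pool_free_gb=10)
    for off in range(0, 900, 100):              # writes spread across the LUN
        lun.write(off)
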
VNX with MCx architecture platforms support block-level deduplication.
Deduplication happens at an 8 KB block granularity and requires an 8 KB mapping. It can be set at the pool LUN level, and Thin, Thick, and Deduplicated LUNs can be stored in a single pool. The deduplication domain is the pool itself; deduplication does not take place across pools. Deduplication occurs out-of-band and is throttled to minimize host I/O impact. The Block Compression feature provides further capacity savings beyond those initially provided by Virtual Provisioning (compression requires Thin LUNs).
The Block Compression feature uses standard data compression algorithms to reduce the space allocated to LUNs. Block Compression may be applied to Classic LUNs or Pool LUNs. Block Compression operates in the background while the LUNs remain available for host access. All compression and decompression processes are handled by the VNX, so no server cycles are consumed in the process and no additional server software is required. When sufficient new data is written to a compressed LUN, the system automatically attempts to compress the uncompressed data in the background.
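
A toy sketch of fixed-block deduplication at 8 KB granularity (the hash choice and data structures are assumptions, not the MCx implementation); identical blocks written to the same pool are stored once:

    import hashlib

    BLOCK = 8 * 1024                            # 8 KB deduplication granularity

    class DedupPool:
        def __init__(self):
            self.store = {}                     # digest -> one physical block
            self.lun_maps = {}                  # lun -> list of digests (the 8 KB mapping)

        def write(self, lun, data: bytes):
            digests = []
            for i in range(0, len(data), BLOCK):
                block = data[i:i + BLOCK].ljust(BLOCK, b"\x00")
                d = hashlib.sha256(block).digest()
                self.store.setdefault(d, block) # duplicate blocks stored only once
                digests.append(d)
            self.lun_maps[lun] = digests

    pool = DedupPool()
    pool.write("lun0", b"A" * BLOCK * 4)        # four identical blocks
    pool.write("lun1", b"A" * BLOCK * 2)        # dedups against lun0's data
    print(len(pool.store), "unique block(s) stored")   # -> 1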

VNX also supports file-level deduplication and compression. VNX for File performs all deduplication processing as a background, asynchronous operation that acts on file data after it has been written into the file system. It does not process active data or data as it is written into the file system.
Deduplication activity can be throttled to avoid impact on processes serving client I/Os. Once candidate files have been identified for deduplication, two activities take place (a sketch of the single-instancing step follows the list):

  • Compression: Compression is accomplished by using components similar to those used in VNX for Block LUN compression.
  • Deduplication: File-level deduplication, or single instancing, is accomplished by using components of another EMC product, Avamar, which provides duplicate-file identification algorithms (file identification for single instancing is accomplished using a hashing algorithm from Avamar).
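
A hedged sketch of the single-instancing step (the helper names are hypothetical, and this is plain whole-file content hashing, not Avamar's algorithm):

    import hashlib

    def file_digest(path: str) -> str:
        # Hash file content in 1 MB chunks so large files stay memory-friendly.
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def single_instance(paths):
        """Return digest -> kept path, plus the duplicates found.

        Usage: kept, dupes = single_instance(["/fs/a.doc", "/fs/b.doc"])
        Each entry in dupes is a candidate to replace with a stub that
        references the single stored instance.
        """
        kept, dupes = {}, []
        for p in paths:
            d = file_digest(p)
            if d in kept:
                dupes.append((p, kept[d]))      # same content already stored
            else:
                kept[d] = p
        return kept, dupes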



VNX storage systems support an optional FAST Cache consisting of a storage pool of Flash disks configured to function as FAST Cache. This cache provides low latency and high I/O performance without requiring a large number of Flash disks. It is also expandable while I/O to and from the storage system is occurring. FAST Cache is based on the locality of reference of the data set. By promoting the data set to FAST Cache, the storage system services any subsequent requests for that data faster from the Flash disks that make up the FAST Cache, thus reducing the load on the disks in the LUNs that contain the data (the underlying disks).
FAST Cache consists of one or more pairs of mirrored disks (RAID 1) and provides both read and write caching. For reads, the FAST Cache driver copies data off the disks being accessed into the FAST Cache. For writes, FAST Cache effectively buffers the data waiting to be written to disk. In both cases, the workload is off-loaded from slow rotating disks to the faster Flash disks in FAST Cache. The performance boost provided by FAST Cache varies with the workload and the cache size.
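A simplified model of the promotion behavior; the chunk notion, hit threshold, and structures here are assumptions for illustration, not the actual FAST Cache policy:

    from collections import defaultdict

    PROMOTE_AFTER = 3                           # assumed hit threshold for promotion

    class FastCache:
        def __init__(self):
            self.hits = defaultdict(int)        # chunk -> recent access count
            self.cache = {}                     # chunk -> data held on Flash

        def read(self, chunk, hdd_read):
            if chunk in self.cache:             # hot data served from Flash
                return self.cache[chunk]
            self.hits[chunk] += 1
            data = hdd_read(chunk)              # cold data served from rotating disks
            if self.hits[chunk] >= PROMOTE_AFTER:
                self.cache[chunk] = data        # promote the hot chunk to Flash
            return data

        def write(self, chunk, data):
            if chunk in self.cache:
                self.cache[chunk] = data        # dirty data buffered on Flash,
                return True                     # destaged to disk later
            return False

    def hdd_read(chunk):
        return f"data-{chunk}"

    fc = FastCache()
    for _ in range(4):
        fc.read("chunk-42", hdd_read)           # repeated access triggers promotion
    print("chunk-42" in fc.cache)               # -> True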

VNX FAST VP tracks data in a pool at a granularity of 256 MB (a "slice") and ranks slices according to their level of activity and how recently that activity took place. Slices that are heavily and frequently accessed are moved to the highest tier of storage, typically Flash drives, while the data that is accessed least is moved to lower performing but higher capacity storage, typically NL-SAS drives. This sub-LUN granularity makes the process more efficient and enhances the benefit achieved from the addition of Flash drives.
The ranking process is automatic and requires no user intervention. Relocation of slices occurs according to a user-configurable schedule, which defaults to a daily relocation. Users can also start a manual relocation if desired.
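A rough illustration of a ranking and relocation pass; the tier names, decay-based scoring, and capacity model are assumptions, not the FAST VP algorithm:

    import time

    TIERS = ["flash", "sas", "nl-sas"]          # highest to lowest performance

    def score(slice_stats, now=None):
        # Weight I/O count by recency so stale activity decays over time.
        now = now or time.time()
        age_h = (now - slice_stats["last_io"]) / 3600
        return slice_stats["io_count"] / (1 + age_h)

    def relocate(slices, capacity):
        # Place the hottest slices in the highest tiers that have free capacity.
        placement = {}
        ranked = sorted(slices, key=score, reverse=True)
        free = dict(capacity)                   # tier -> free 256 MB slice slots
        for s in ranked:
            for tier in TIERS:
                if free[tier] > 0:
                    free[tier] -= 1
                    placement[s["id"]] = tier
                    break
        return placement

    slices = [{"id": i, "io_count": c, "last_io": time.time() - i * 3600}
              for i, c in enumerate([900, 50, 400, 10])]
    print(relocate(slices, {"flash": 1, "sas": 2, "nl-sas": 10}))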




4. Management
EMC Unisphere for VNX provides a flexible, integrated experience for managing VNX storage systems. Unisphere’s wizards help the user to provision and manage the storage while automatically implementing best practices for the configuration.
Unisphere is completely web-enabled for remote management of the storage environment. The Unisphere Management Server runs on the SPs and the Control Station. Administrative users must authenticate to the VNX when using Unisphere. The VNX provides flexible options for administrative user accounts. For deployments where the VNX will be administered by multiple people, the VNX offers the ability to create multiple unique administrative accounts. Different administrative roles can be defined for the user accounts to distribute administrative tasks among the users.

Administration of the VNX system can be performed with two Command Line Interfaces (CLIs). Administrative users must authenticate to the VNX when using the CLI interfaces. Block-enabled systems have a host-based Secure CLI software option called Navisphere Secure CLI. Navisphere Secure CLI is a client application that allows simple operations on the EMC VNX Series platform and some other legacy storage systems. The CLI can be used to automate management functions through shell scripts and batch files, as in the sketch below.
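
For example, a small Python wrapper can drive Secure CLI commands from a script. The SP address here is hypothetical, and credentials are assumed to have been stored beforehand (for example with -AddUserSecurity); getagent and getlun are standard Secure CLI commands:

    import subprocess

    SP_A = "10.0.0.1"                           # hypothetical SPA management IP

    def naviseccli(*args: str) -> str:
        """Run a Navisphere Secure CLI command and return its output."""
        cmd = ["naviseccli", "-h", SP_A, *args]
        out = subprocess.run(cmd, capture_output=True, text=True, check=True)
        return out.stdout

    if __name__ == "__main__":
        print(naviseccli("getagent"))           # array model, revision, serial
        print(naviseccli("getlun", "-capacity"))  # per-LUN capacity report
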
File-enabled VNX systems use a command line interface to the Control Station for file administrative tasks. If VNX for File or Unified is present, the Control Station can be reached via serial connection or SSH to troubleshoot many VNX for File hardware components.


Unisphere Central is a network application that enables administrators to remotely monitor multiple VNX systems whether they reside in a data center or are deployed in remote and branch offices. It provides administrators the ability to monitor the health, trends and alerts of large numbers of VNX systems from a central console. Unisphere Central is a virtual appliance that runs in a VMware virtual environment.
Users continue to leverage the Unisphere element manager for provisioning and management of their VNX systems, whereas Unisphere Central provides a quick and easy way to monitor all their deployed VNX systems.


 
Unisphere Analyzer is the VNX performance analysis tool. It helps identify bottlenecks and hotspots in VNX storage systems and enables users to evaluate and fine-tune the performance of their VNX system.
Data may be collected by the storage system, or by a Windows host in the environment running the appropriate software. Different performance metrics are collected from disks, Storage Processors, LUNs, cache, and SnapView snapshot sessions. Data may be displayed in real time or saved as a .nar (Unisphere Archive) file for later analysis.
For File-based detailed monitoring, users have access to the native monitoring capability of Unisphere for File; additional monitoring for the VNX for File components is available in the separately licensed Data Protection Advisor for File Server offering, which provides detailed reporting not only on replication configurations but on VNX for File in general.


Unisphere Quality of Service Manager (UQM) measures, monitors, and controls application performance on the VNX storage system. UQM can be a powerful tool for evaluating the storage system to determine the current service levels and to provide guidance on what service levels are possible, given the specific environment.
UQM may be managed by the Unisphere GUI, Secure CLI, or Unisphere Client.
Because UQM is array resident, there is no host component to load and no performance impact on the host.
UQM controls array performance by allocating resources to user-defined classes of I/O. This resource allocation allows the specified I/O classes to meet pre-defined performance goals.
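
As a rough sketch of the concept (class names, limits, and the admission check are invented for illustration; UQM's actual policy engine is richer), I/Os are classified and throttled so each class stays within its allocation:

    class IoClass:
        def __init__(self, name, luns, iops_limit):
            self.name, self.luns = name, set(luns)
            self.iops_limit = iops_limit        # control: cap for this class
            self.count = 0                      # I/Os admitted this interval

    def admit(io_classes, lun):
        # Admit an I/O if its class is still under its per-interval limit.
        for c in io_classes:
            if lun in c.luns:
                if c.count < c.iops_limit:
                    c.count += 1
                    return True
                return False                    # throttled to protect other classes
        return True                             # unclassified I/O is not limited

    classes = [IoClass("oltp", ["lun0", "lun1"], iops_limit=10_000),
               IoClass("backup", ["lun9"], iops_limit=500)]
    print(admit(classes, "lun9"))               # -> True until the cap is reached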


VNX Monitoring and Reporting is a cost-effective software solution. Accessible from a web portal, it has a tree-view format that drills down into several summary or filtered views. VNX Monitoring and Reporting is a limited version of Watch4Net for VNX. It provides basic monitoring and reporting capabilities for VNX administrators. VNX Monitoring and Reporting automatically collects block and file storage statistics along with configuration data, and stores them in a database that can be viewed through dashboards and reports. Watch4Net offers a custom report development framework that extends the preconfigured VNX reports and creates new reports to meet specific reporting needs. It provides real-time, historical, and projected visibility into network, data center, storage, and cloud infrastructure performance. Users who have more complex environments, or more complex needs, can use Watch4Net or upgrade VNX Monitoring and Reporting to the full version of Watch4Net.









