May 19, 2015

Hands-on - EMC VNX2 - Block Deduplication

In an attempt to save additional space, you decide the best feature to use is Block Deduplication. This is a storage system-based capacity efficiency feature of VNX2 systems that removes duplicate 8KB blocks, leaving only a single copy of the data. You will enable deduplication on the storage you provisioned for the marketing team to help save space.
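The lab steps below use Unisphere, but the same change can be made from the command line. A minimal naviseccli sketch is shown here; the SP address 10.0.0.1 and LUN ID 25 are made-up placeholders, and the exact flag names should be verified against the CLI reference for your VNX2 release:

    # Enable Block Deduplication on an existing pool LUN (placeholder SP address and LUN ID)
    naviseccli -h 10.0.0.1 lun -modify -l 25 -deduplication on -o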

Navigate to the LUNs Page




View LUN Properties



Turn On Deduplication

Confirm





Deduplication Summary

  • Now that you have enabled deduplication on the Social Media Daily Reports LUN, data stored on the LUN in the future can be deduplicated, saving space depending on how many duplicate data blocks are present.
  • Note: Since there is no data in the Social Media Daily Reports LUN and deduplication can take time to complete, for the purposes of this demonstration you will view deduplication savings on another LUN. The Marketing - Virtual Machines LUN has already been deduplicated as an example.
  • This 70GB LUN was filled with virtual machine .vmdk files, which contain a lot of duplicate data, making them good candidates for deduplication. Since deduplication runs on all deduplication-enabled LUNs in a pool, the savings are viewed at the pool level.

Click Deduplication Summary on the right


View Savings
On this page, you can view the savings from deduplication by storage pool. Because the Marketing - Virtual Machines LUN is in the Marketing storage pool, you will be viewing the savings on this pool.
You can see that the savings are high in this case due to the duplicate data contained within the .vmdk files. Deduplication savings will vary based on the data contained within the pool.
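If you prefer the CLI, the pool-level view can be obtained with a pool listing; on a VNX2 the output includes deduplication-related fields, although the exact field names vary by release (the SP address and pool name below are placeholders):

    # List the Marketing pool and inspect its deduplication fields (placeholder values)
    naviseccli -h 10.0.0.1 storagepool -list -name "Marketing"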
Block Deduplication works best under the following circumstances:

  • LUNs that contain a large amount of duplicate data
  • LUNs that contain a large amount of static data
  • LUNs that experience less than 30% writes
  • LUNs that do not experience sequential or large-block random I/O
  • LUNs that contain latency-insensitive applications

Hands-on - EMC VNX2 - Hot Spare Policy

Hot Spare Policy in Unisphere. 
Because any suitable unbound drive can now be used as a permanent spare, Hot Spare Policies are used to monitor the number of unbound drives. This is intended to prevent situations where the array is unintentionally left without any available spares.

1. Hover over System
2. Click Hot Spare Policy





Note the Recommended Hot Spare Policy is 1 per 30 disks.
Click NL SAS. The Hot Spare Policy for the NL SAS drives has already been changed to No Hot Spares.
Note that this does not mean there will be no hot spares for NL SAS drives. Any unbound drive will still be used as a hot spare in the event of a drive failure. No Hot Spares only means the Hot Spare Policy will not reserve any drives to be hot spares, and you will not be warned if all of the drives are bound.
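The same information is available from the command line. A minimal sketch, assuming an SP at the placeholder address 10.0.0.1 (the command also has a -set form for changing a policy's ratio; check the CLI reference for the exact options):

    # Show the current hot spare policies and how many unbound drives each one keeps
    naviseccli -h 10.0.0.1 hotsparepolicy -list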






Change the Policy


Change the Policy - 1 per 10
After doing this, you see that the number of disks to keep unused changes to 2 disks. This is because you set the policy to keep 1 out of every 5 drives as a hot spare. Since you have 10 SAS Flash drives total, the policy reserves 2 drives.
Even though Joe wants to have enough hot spares to be extra safe when the new drives arrive, keeping 1 out of every 5 drives as a hot spare wastes a lot of drives. You decide to change the policy to 1 per 10 instead. Use the drop-down menu to change the policy to 1 per 10.




Apply Changes
Notice that the number of disks to keep unused changes to 1 disk. This number will increase depending on how many drives Joe ordered to be added to the system. If even 1 new drive is added, the number to reserve for hot spares will increase to 2, which meets Joe's requirement of having at least 2 spare Flash drives available at any time.
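From the behaviour described above (inferred from this lab rather than from EMC documentation), the reservation count appears to be the total drive count divided by the policy ratio, rounded up:

    1 per 5  with 10 drives -> ceil(10/5)  = 2 drives kept unused
    1 per 10 with 10 drives -> ceil(10/10) = 1 drive kept unused
    1 per 10 with 11 drives -> ceil(11/10) = 2 drives kept unused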




Hands-on - EMC VNX2 - Multicore Cache

VNX2 provides two caching features, Multicore Cache and Multicore FAST Cache. The first of these is Multicore Cache, which uses the system memory for cache. It introduces the following changes:
  •  Completely multicore scalable
  •  Dynamic cache that automatically allocates read/write cache
  •  High/Low Watermarks have been removed
  •  Cache page size is now locked at 8KB
  •  Flushing has been changed from forced to predicted
  •  Write cache can be enabled/disabled at the array or Classic LUN level
  •  Proactively cleans pages during times of low activity, leaving clean pages in cache





  • Note that the High/Low Watermarks, write cache size, and page size options have been removed. The system dynamically allocates cache for reads/writes as necessary, and the cache now has a fixed page size of 8KB. You can also see the Read/Write Cache Hit Ratios and Dirty Pages (MB) here.
  • Note that the read cache is always enabled and the write cache is enabled by default. Your numbers may be different than those shown below; an equivalent CLI view is sketched after this list.
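A minimal CLI sketch of the same view, assuming an SP at the placeholder address 10.0.0.1 (on VNX2 the older getcache command was replaced by the cache -sp form; verify against your release):

    # Show Multicore Cache state, hit ratios and dirty pages from the CLI
    naviseccli -h 10.0.0.1 cache -sp -info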


FAST Cache
Multicore FAST Cache serves as a high-capacity secondary cache to the Multicore Cache you viewed in the SP Cache tab. This cache is formed from Flash drives positioned between the storage processor's DRAM-based primary cache and the hard disk drives, and it provides improved access times and higher I/O rates. The Flash drives no longer need to be dedicated to specific applications.
FAST Cache provides a much larger, scalable cache by using Flash drives, which offer very large capacities compared to DRAM.


1. Ensure that RAID Type is set to 1, Number of Disks is set to 2, and Automatic is selected
2. Click OK to create FAST Cache (a CLI equivalent is sketched below)
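The same creation step from the CLI looks roughly like this. This is a sketch only: the SP address and the two disk IDs (bus_enclosure_disk) are placeholders, and the flag names should be checked against the naviseccli reference for your release:

    # Create FAST Cache as a RAID 1 pair from two Flash drives (placeholder disk IDs)
    naviseccli -h 10.0.0.1 cache -fast -create -disks 0_0_4 0_0_5 -mode rw -rtype r_1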


     

EMC VNX VNX2 Technical Update - Deep Dive
Overview Link

Virtual VNX: Overview, Architecture & Use Cases

VNX Overview

Vnx series-technical-review

EMC VNX Unified Best Practices For Performance: Applied Best Practices Guide

NAS Meets SAN – an intro to VNX Unified

EMC VNX Monitoring and Reporting

May 16, 2015

Oracle Server - SPARC M10 - Install Solaris

Unless you intend to use the pre-installed disk image on each PPAR, there are several possible methods for installing the PPARs, including the following (an XSCF console sketch for reaching each PPAR is shown after this list):
  • Install each PPAR from a Solaris DVD using a local USB drive. The drive must be connected to the front of a BB – see MOS Document ID 1553734.1.
  • Install a Solaris 10 PPAR using an install server (e.g. via JET). This requires the availability of an install server provided by the customer or an engineer's laptop (in those countries where laptops are used as install servers AND the customer permits introduction of a laptop into the data center).
  • Install a Solaris 11 PPAR using a public or local Solaris package repository.
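Whichever method you choose, you normally reach each PPAR's console from the XSCF firmware first. A minimal sketch, assuming PPAR-ID 0 (see the XSCF reference manual for the exact options):

    XSCF> poweron -p 0      <- power on PPAR 0
    XSCF> console -p 0      <- connect to the PPAR 0 console to reach the ok prompt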
Oracle-fujitsu-m10-server-features-and-capabilities

Oracle Sparc m10 series-servers_introduction_and_overview

Oracle Sparc m10 series-servers_installation

Oracle Sparc m10 series-servers _architecture

Other topic


1.  Pre-installed disk image
Standard installations carried out by manufacturing are as follows:
  •  Solaris 11.1 will have been loaded.
  •  ZFS root file system will be default.
  •  The on-board RAID (if available) will not be active.
  •  The system will have been patched and configured according to EIS methodology.
  •  Before leaving the factory, the system will have been “sys-unconfig'd” and the SP set back to defaults.
If you have set up a RAID configuration for the boot disk, the pre-installed image will be lost.
2. Booting from an External DVD

The example below is from an M10-4S with 4 building blocks; the external DVD is attached to BB#03, which is assigned to LSB03.

  •     Plug the external DVD player into the USB port on the front of the M10 machine.
  •     Power the DVD player on.
  •     Power on the server, or type 'reset-all' at the OBP prompt.
  •     For this example the DVD is connected to the front USB port of BB#03, which is LSB03. Look at the 'show-devs' output and look for a line similar to this:
            /pci@9800/pci@4/pci@0/pci@1/pci@0/usb@4,1/hub@2
    (See Doc Id. 1543194.1 for more M10 device path information)

If you don't see the string "disk" at the end, the DVD player is not properly attached and powered on.
Correct:
/pci@9800/pci@4/pci@0/pci@1/pci@0/usb@4,1/hub@2/storage@1/disk
DVD Not connected:
/pci@9800/pci@4/pci@0/pci@1/pci@0/usb@4,1/hub@2/storage@1

Then, to boot, type:
boot /pci@9800/pci@4/pci@0/pci@1/pci@0/usb@4,1/hub@2/storage@1/disk
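Putting the steps together, a typical session at the ok prompt looks roughly like this (the device path is the example path from above; yours will differ depending on which BB the drive is plugged into):

    {0} ok reset-all
    {0} ok show-devs
        ... look for the usb hub/storage line ending in /disk ...
    {0} ok boot /pci@9800/pci@4/pci@0/pci@1/pci@0/usb@4,1/hub@2/storage@1/disk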

VMware Horizon™ with View™ - Architecture



View Connection Server
  • View Connection Server streamlines the management, provisioning, and deployment of virtual desktops. As an administrator, you can centrally manage thousands of virtual desktops from a single console. End users connect through View Connection Server to securely and easily access their personalized virtual desktops. View Connection Server acts as a broker for client connections by authenticating and directing incoming user desktop requests.
View Security Server
  • A View security server is an instance of View Connection Server that adds an additional layer of security between the Internet and your internal network. Outside the corporate firewall, in the DMZ, you can install and configure View Connection Server as a View security server. Security servers in the DMZ communicate with View Connection Servers inside the corporate firewall. Security servers ensure that the only remote desktop traffic that can enter the corporate data center is traffic on behalf of a strongly authenticated user. Users can only access the desktop resources for which they are authorized.
View Composer Server
  • View Composer Server is an optional service that enables you to manage pools of “like” desktops, called linked-clone desktops, by creating master images that share a common virtual disk. Linked-clone desktop images are one or more copies of a parent virtual machine that share the virtual disks of the parent, but which operate as individual virtual machines. Linked-clone desktop images can optimize your use of storage space and facilitate updates. You can make changes to a single master image through the vSphere Client. These changes trigger View Composer Server to apply the updates to all cloned user desktops that are linked to that master image, without affecting users’ settings or persona data.

1. View Components
  • View Connection Servers
    • Administrative console for View environment
    • Manages user entitlements to desktop and application pools
      • Integrates with Active Directory
        • Works with View Agent for user session management and Single Sign-On
    • Application entitlement and distribution
      • ThinApp repository on Windows file and print server
      • RDS application pool publishing
    • Works with vCenter Server
      • Provisions virtual machines from templates as needed for desktop resource pools










Citrix XenDesktop vs VMware Horizon View


Compare - Sizing Guide for Windows 7

Compare - Dispelling Citrix Myths

Compare - Four reasons to go beyond VMware Horizon with Citrix XenApp and XenDesktop

Compare - VMware Horizon 6 Advantages Over Citrix XenDesktop

Compare - Why Choose VMware® Horizon View™ Over Citrix XenDesktop

Compare - Why Choose VMware® Horizon View™ over Microsoft RDS 2012

Compare - XenDesktop for Microsoft VDI

Compare - VMware Horizon View 5.2 and Citrix XenDesktop 7

Citrix XenServer vs VMware vs Redhat

Compare - VMware vSphere 4.1 Features and Other

Compare - Vmware vSphere 5 - XenServer 6 - Hyper-V 2008

Compare - VMware vSphere 5 Features and Other

Compare - Citrix XenServer 6.2 And VMware vSphere 5.1

Compare - Technical and commercial comparison of Citrix XenServer and VMware

Compare - XenServer 6.1 vs. RedHat EV 3.0