1. Useful Links
2. Overview
VNX SnapView snapshots are views of the data, not the actual data itself. As a result, creating a snapshot and starting a session is a very quick process, requiring only a few seconds. The view that is then presented to the secondary host is a frozen copy of the source LUN as the primary host saw it at the time the session was started. The SnapView snapshot is writable by the secondary host, but any changes made to it are discarded if the SnapView snapshot is deactivated.
A SnapView snapshot is a composite of the unchanged data chunks on the source LUN and data chunks saved on a LUN called a reserved LUN. Before chunks of data are written to the source LUN, they are copied to a reserved area in private space, and the memory map is updated with the new location of these chunks. This process is referred to as copy on first write (COFW). The COFW mechanism uses pointers to track whether data is on the source LUN or in the Reserved LUN Pool. These pointers are kept in SP memory, which is volatile, and could therefore be lost if the SP should fail or the LUN be trespassed. The SnapView feature designed to prevent this loss of session metadata is session persistence, which stores the pointers on the reserved LUN(s) for the session. All sessions are automatically persistent and the user cannot turn off persistence.
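To make the persistence mechanism concrete, here is a toy sketch (not EMC code) that keeps the chunk-location pointers both in memory and in a small on-disk copy, so the pointers can be rebuilt if the in-memory copy is lost. The file path and class names are purely illustrative; the real on-disk format used on the reserved LUNs is internal to SnapView.

```python
# Toy illustration of persistent session metadata; not the real SnapView layout.
import json

class PersistentChunkMap:
    """Chunk index -> location of the point-in-time copy ('source' or an RLP offset)."""

    def __init__(self, path):
        self.path = path     # stands in for the map area on the session's reserved LUN(s)
        self.map = {}        # in-memory copy (the pointers held in SP memory)

    def record(self, chunk_idx, location):
        self.map[chunk_idx] = location
        # Persist every update so an SP failure or a LUN trespass does not lose it.
        with open(self.path, "w") as f:
            json.dump(self.map, f)

    def recover(self):
        """Rebuild the in-memory pointers after the SP-memory copy is lost."""
        with open(self.path) as f:
            self.map = {int(k): v for k, v in json.load(f).items()}
        return self.map
```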
The SnapView snapshot can be made accessible to a secondary host, but not to the primary host (unless software that allows simultaneous access, like EMC Replication Manager, is used).
A VNX storage system is required in order to use SnapView snapshots. If a host is to access the SnapView snapshot, two or more hosts are required: one primary host that accesses the VNX source LUN, and one or more secondary hosts that access the SnapView snapshot of the source LUN.
The Admsnap program runs on the host system in conjunction with SnapView running on the EMC VNX storage processors (SPs), and allows the user to start, activate, deactivate, and stop SnapView sessions. The Admsnap utility is a command-line executable that the user can run interactively or from a script. This utility ships with the SnapView enabler.
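As an illustration only, the following snippet sketches how the admsnap lifecycle described above (start, activate, deactivate, stop) might be scripted. The option names shown (-s for the session name, -o for the host device) and the device paths are assumptions made for this example; check the admsnap documentation for the exact syntax on your platform and version.

```python
# Hypothetical sketch of scripting the admsnap lifecycle; flags and paths are assumptions.
import subprocess

SESSION = "nightly_backup"      # hypothetical session name
SOURCE_DEV = "/dev/sdb"         # source LUN device as seen by the primary host (assumed)
                                # the snapshot device on the secondary host appears after activation

def run(*args):
    """Run a command and raise if it returns a non-zero exit status."""
    print("running:", " ".join(args))
    subprocess.run(args, check=True)

# On the primary host: start a SnapView session against the source LUN.
run("admsnap", "start", "-s", SESSION, "-o", SOURCE_DEV)

# On the secondary host: activate the snapshot against that session,
# making the point-in-time view visible after a device rescan.
run("admsnap", "activate", "-s", SESSION)

# ... the secondary host uses the snapshot (backup, test, reporting) ...

# On the secondary host: deactivate the snapshot; writes made to it are discarded.
run("admsnap", "deactivate", "-s", SESSION)

# On the primary host: stop the session to release reserved LUN space.
run("admsnap", "stop", "-s", SESSION, "-o", SOURCE_DEV)
```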
The source LUN, the SnapView session, and the reserved LUN work together to create the SnapView snapshot. The SnapView snapshot is made visible to a secondary host when it is activated against a SnapView session (given that a SnapView snapshot has been defined and added to a storage group connected to the secondary host).
The copy on first write mechanism (COFW) involves saving an original data area from the source LUN into a special save area (Reserved LUN) when that data area on the active LUN is about to be changed. The official term for that data area is a ‘chunk’ – a 64 KB piece of contiguous data.
The chunk is saved only once per session. This ensures that the view of the LUN is consistent and, unless writes are made to the SnapView snapshot, is always a true indication of what the LUN looked like at the time it was snapped (the session was started).
Saving only chunks that have been changed allows efficient use of the disk space available – whereas a full copy of the LUN would use additional space equal in size to the active LUN, a SnapView snapshot may use as little as 20% of the space, on average. This depends greatly, of course, on how long the SnapView snapshot needs to be available and how frequently data changes on the source LUN and the SnapView snapshot.
Saving original blocks also means that SnapView can roll a source LUN back to a previous point in time. This may be useful for fast recovery from corruption of the source LUN.
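The COFW behavior described above can be modeled in a few lines. The sketch below is purely illustrative (not EMC code): it saves a 64 KB chunk to a reserved area only on the first write after the session starts, reconstructs the point-in-time view from the source LUN plus the saved chunks, and shows why a rollback to the session's point in time is possible.

```python
# Illustrative model of copy on first write (COFW); not EMC code.
CHUNK = 64 * 1024  # SnapView tracks data in 64 KB chunks

class CofwSession:
    def __init__(self, source: bytearray):
        self.source = source          # stands in for the source LUN
        self.saved = {}               # chunk index -> original chunk (stands in for the reserved LUN)

    def write_source(self, offset: int, data: bytes):
        """Host write to the source LUN while the session is active."""
        idx = offset // CHUNK
        if idx not in self.saved:     # copy the original chunk on the FIRST write only
            self.saved[idx] = bytes(self.source[idx * CHUNK:(idx + 1) * CHUNK])
        self.source[offset:offset + len(data)] = data

    def point_in_time_chunk(self, idx: int) -> bytes:
        """Return the chunk as it looked when the session started."""
        if idx in self.saved:         # chunk was changed -> read the saved copy
            return self.saved[idx]
        return bytes(self.source[idx * CHUNK:(idx + 1) * CHUNK])  # unchanged on the source

    def rollback(self):
        """Roll the source back to the session's point in time."""
        for idx, original in self.saved.items():
            self.source[idx * CHUNK:(idx + 1) * CHUNK] = original
```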
Due to the dynamic nature of reserved LUN assignment per source LUN, it may be better to have many smaller LUNs that can be used as a pool of individual resources. A limiting factor here is that the total number of reserved LUNs allowed varies by storage system model.
- Each reserved LUN can be a different size, and allocation to source LUNs is based on which is the next available reserved LUN, without regard to size. This means that there is no mechanism to ensure that a specified reserved LUN will be allocated to a specified source LUN. Because of the dynamic nature of the SnapView environment, assignment may be regarded as a random event (though, in fact, there are rules governing the assignment of reserved LUNs).
- The Reserved LUN Pool can be configured with thick pool LUNs or classic LUNs only. Pool LUNs that are created as Thin LUNs cannot be used in the RLP.
- The combination of these factors makes the sizing of the reserved LUN pool a non-trivial task – particularly when Incremental SAN Copy and MirrorView/A are used along with Snapshots.
- The working assumption is that 10% of the data on the source LUN changes while the session is active. Creating two reserved LUNs per source LUN allows for a safety margin: it provides twice the expected space, for a total of 20%.
- This example shows a total of 160 GB to be snapped, with eight reserved LUNs totaling 32 GB.
- The user may obtain reserved LUN pool statistics by viewing the properties of the reserved LUN pool. At-a-glance information presented includes the amount of space used and the amount of free space. Warnings about LUN pool usage are available in the SP Event Log and may also be available elsewhere, if configured by the user. Some performance related information, of the type available from Unisphere Analyzer, may be accessed from the GUI.
- Once the LUN Pool has been populated, the next step is to flag all required LUNs as Snapshot source LUNs. This is done by creating snapshots of those LUNs or starting sessions on them, which changes the LUN state from a regular LUN to a Snapshot source LUN. The changed state is stored on the PSM LUN, so that it survives power cycles. Destroying the SnapView snapshots and sessions associated with a source LUN returns it to regular LUN status, and that change is also recorded on the PSM LUN.
Note that the calculation shown here is a compromise. Different results are obtained if the goal is to minimize the number of reserved LUNs or to minimize the wasted space in the reserved LUN pool. Also note that the Snapshot Wizard creates larger reserved LUNs by default. It is dealing with a potentially less-experienced user and leaves more overhead for safety.
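The arithmetic behind the 160 GB / 32 GB example can be reproduced as follows. The four 40 GB source LUNs are an assumed breakdown (only the 160 GB total is given above), and the 10% change rate and two reserved LUNs per source LUN are the assumptions stated earlier, not fixed rules.

```python
# Reserved LUN pool sizing sketch, using the assumptions from the text.
source_luns_gb = [40, 40, 40, 40]     # hypothetical: four 40 GB source LUNs = 160 GB total
change_rate = 0.10                     # expected change while a session is active
safety_factor = 2                      # two reserved LUNs per source LUN (twice the expected space)

total_source = sum(source_luns_gb)                        # 160 GB
rlp_total = total_source * change_rate * safety_factor    # 32 GB of reserved LUN pool
rl_count = len(source_luns_gb) * safety_factor            # 8 reserved LUNs
rl_size = rlp_total / rl_count                             # 4 GB each

print(f"{rl_count} reserved LUNs x {rl_size:g} GB = {rlp_total:g} GB "
      f"for {total_source} GB of source LUNs")
```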
Here we see the different host I/O types that may be directed at a Snapshot. Note that if there is no active session on the Snapshot, it appears off-line to the host, and the host operating system raises an error if an attempt is made to access it.
If a session is active, the SnapView driver needs to perform additional processing. Reads may require data to be read from the reserved LUN pool or the source LUN, and the driver needs to consult the memory map to determine where the data chunks are located and retrieve them.
Writes to a Snapshot are always directed at the reserved LUN pool because the secondary host has no access to the source LUN. The driver needs to determine whether the data chunk is already in the Reserved LUN Pool. If it is, the write proceeds. If it is not, the chunk is copied to the reserved LUN pool and the write is performed on that chunk: the chunk is copied into SP memory, the write data is merged with the chunk, and the modified chunk is written to the reserved LUN Pool.
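The read and write handling just described can be summarized as a small dispatch routine. This is a conceptual sketch only, not EMC code: reads consult the chunk map to decide between the source LUN and the reserved LUN pool, writes from the secondary host always land in the reserved LUN pool (copying the original chunk there first if necessary), and a snapshot with no active session is reported as unavailable.

```python
# Conceptual sketch of how secondary-host I/O to a SnapView snapshot is resolved.
CHUNK = 64 * 1024

class SnapshotView:
    def __init__(self, source: bytearray, session_active: bool = False):
        self.source = source          # source LUN contents
        self.rlp = {}                 # chunk index -> chunk bytes held in the reserved LUN pool
        self.session_active = session_active

    def read(self, idx: int) -> bytes:
        if not self.session_active:
            raise IOError("snapshot is not activated: device appears off-line to the host")
        if idx in self.rlp:           # chunk was saved/modified -> read it from the RLP
            return bytes(self.rlp[idx])
        return bytes(self.source[idx * CHUNK:(idx + 1) * CHUNK])   # unchanged -> read the source

    def write(self, idx: int, offset_in_chunk: int, data: bytes):
        if not self.session_active:
            raise IOError("snapshot is not activated: device appears off-line to the host")
        if idx not in self.rlp:       # copy the original chunk into the RLP first
            self.rlp[idx] = bytearray(self.source[idx * CHUNK:(idx + 1) * CHUNK])
        # Merge the write data into the chunk; the source LUN is never touched.
        self.rlp[idx][offset_in_chunk:offset_in_chunk + len(data)] = data
```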
To use SnapView snapshots, the user must create a Reserved LUN Pool. This LUN pool needs enough available space to hold all the original chunks on the source LUN that are likely to change while the session is active.
When the user starts the first session on a source LUN, one reserved LUN is assigned (allocated) to that source LUN. If that reserved LUN becomes full while the session is running, the next available reserved LUN is automatically assigned to the source LUN. When the session starts, the COFW mechanism is enabled and SnapView starts tracking changes to the source LUN.
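Below is a minimal sketch of this allocation behavior, under the assumption that the pool is consumed strictly in "next available" order and without regard to size (as noted earlier, the real assignment logic has additional rules). The LUN names and sizes are hypothetical.

```python
# Illustration of "next available reserved LUN" allocation; not EMC code.
from collections import deque

class ReservedLunPool:
    def __init__(self, lun_sizes_gb):
        self.free = deque(lun_sizes_gb)     # free reserved LUNs, possibly of different sizes
        self.assigned = {}                  # source LUN name -> list of assigned reserved LUN sizes

    def start_session(self, source_lun: str):
        """Starting the first session assigns one reserved LUN to the source LUN."""
        self.assigned.setdefault(source_lun, []).append(self._next_free())

    def reserved_lun_full(self, source_lun: str):
        """When an assigned reserved LUN fills up, the next free one is added."""
        self.assigned[source_lun].append(self._next_free())

    def _next_free(self):
        if not self.free:
            raise RuntimeError("reserved LUN pool exhausted")
        return self.free.popleft()          # size is not considered in the choice

pool = ReservedLunPool([4, 4, 8, 2])        # hypothetical mix of reserved LUN sizes (GB)
pool.start_session("LUN_20")
pool.reserved_lun_full("LUN_20")
print(pool.assigned)                        # e.g. {'LUN_20': [4, 4]}
```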
Creating the SnapView snapshot enables the allocation of an offline device (a virtual LUN) to a storage group. As shown on the slide, the user creates the SnapView snapshot, which remains offline until it is activated against a running SnapView session.
When the snapshot of the source LUN is added to the storage group of Server B, the device is still offline (Not Ready) because the SnapView snapshot has not been activated yet.
Once the SnapView session and the SnapView snapshot are created for a given source LUN, the user can activate the SnapView snapshot. This action associates the SnapView snapshot with the point-in-time view provided by the SnapView session. If the SnapView snapshot is already in a storage group presented to a host, then after activation the connected host is able to see this point-in-time copy of the source LUN data following a bus rescan at the host level.
If the secondary host requests a read, SnapView first determines whether the required data is on the source LUN (i.e. it has not been modified since the session started) or in the reserved LUN Pool, and fetches it from the relevant location.
The COFW process is invoked when a host changes a source LUN chunk for the first time; the original chunk is copied to the reserved LUN Pool.
After the original chunk is copied to the reserved LUN Pool, pointers are updated to indicate that the chunk is now present in the reserved LUN Pool. The map in SP memory and the map on disk (remember that all sessions are persistent) are also updated.
Note that once a chunk has been copied to the reserved LUN Pool, further changes made to that chunk on the source LUN do not initiate any COFW operations for that session.
SnapView snapshots are writable by the secondary host. This slide shows a write operation from Server B. The write addresses a chunk for which no data is yet in the reserved LUN Pool. SnapView copies the chunk from the source LUN to the reserved LUN Pool in an operation that may be thought of as a copy on first write. The copy of the data visible to Server B (the copy in the RLP) is then modified by the write.
- EMC VNX VNX2 Technical Update - Deep Dive
- Virtual VNX: Overview, Architecture & Use Cases
- VNX Overview
- VNX Series Technical Review
- EMC VNX Unified Best Practices for Performance: Applied Best Practices Guide
- NAS Meets SAN – an intro to VNX Unified
- EMC VNX Monitoring and Reporting